
performance #218

Closed
piaohai opened this issue Aug 14, 2018 · 8 comments
Labels
kind/question, wontfix

Comments

piaohai commented Aug 14, 2018

Great job! I'm very interested in this block storage project; I'm looking for block storage for a MySQL storage system.

Is there any performance data, such as IOPS, compared with Ceph? @yasker thanks.

yasker added the kind/question label Aug 14, 2018
yasker (Member) commented Aug 14, 2018

We haven't done much performance testing recently, and we haven't started tuning performance at all. Our previous preliminary benchmarks showed decent performance (at least to our satisfaction) without tuning, given a decent network and SSDs.

AmreeshTyagi commented Nov 11, 2018

@yasker Thanks for providing such a nice tool to manage volumes.

However, I tried to test the performance of a MySQL database using the Longhorn storage class and observed a huge performance drop compared with running a MySQL instance without a PV, running MySQL inside a plain Docker container, or running it on my local machine.
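For context, such a volume is requested through a PVC referencing the Longhorn storage class. A minimal sketch, assuming the class is named longhorn; the claim name and size are illustrative, not taken from this report:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data            # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce           # single-node read-write, typical for a database volume
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi           # illustrative size
EOF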

I didn't want to spend too much time on this, so I just created one simple table and a stored procedure to insert some test data.

-- simple test table
CREATE TABLE student (
  id int(11) NOT NULL AUTO_INCREMENT,
  student varchar(50) DEFAULT NULL,
  age int(11) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=12501 DEFAULT CHARSET=latin1;

DELIMITER $$
-- insert `num` test rows, one INSERT statement at a time
CREATE PROCEDURE populate(in num int)
begin
  declare i int default 0;
  while i < num do
    insert into student (student, age) values (concat('student', i), 20);
    set i = i + 1;
  end while;
end$$
DELIMITER ;

Query executed from phpMyAdmin on all 3 instances:

call populate(500);

1. Local machine and Docker instance (with and without a volume): average query time 0.750 sec
2. MySQL instance installed on a Rancher Kubernetes cluster from the Helm chart without any PV (not useful at all in production): average query time 0.840 sec
3. MySQL instance installed on a Rancher Kubernetes cluster from the Helm chart using a Longhorn volume: average query time 8.939 sec

After spending a day on this, I'm now trying to solve the problem using local-storage, since I see no real impact from running the database on Kubernetes itself (confirmed by the 2nd test) :). For now, I can live without managing database volumes with Longhorn and will set up some cron jobs to take regular backups of the local volumes to NFS.

Please let me know if I am doing something wrong with Longhorn. Have you ever faced such an issue or tried something similar?
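One factor worth checking here (not confirmed in this thread): every INSERT inside the populate loop runs as its own autocommit transaction, so call populate(500) forces 500 synchronous log flushes, and with Longhorn each flush becomes a replicated network write. A rough sketch of how to compare the two cases from a shell, assuming direct access to the database; host, user, and schema names are illustrative:

# Baseline: 500 individual autocommit transactions
mysql -h mysql-host -u root -p testdb -e "CALL populate(500);"

# Same work wrapped in a single transaction: one flush instead of 500
mysql -h mysql-host -u root -p testdb -e "START TRANSACTION; CALL populate(500); COMMIT;"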

yasker (Member) commented Nov 11, 2018

@AmreeshTyagi Thanks for the report.

I think your test case is IOPS-intensive, and we're aware there is plenty of room for improvement there.

With Longhorn, you need a decent network connecting the nodes; otherwise it can become a bottleneck. I've just tried your case on DigitalOcean and GKE, and both give me around 4.7s.

Now that Longhorn is becoming more feature-complete and more usable, we're starting to shift our focus to performance.
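Since replicated writes are bounded by inter-node bandwidth and latency, a quick sanity check of the network between two nodes may be worthwhile. A minimal sketch using iperf3 and ping, assuming iperf3 is installed on both nodes; the IP address is illustrative:

# On node A: start an iperf3 server
iperf3 -s

# On node B: measure throughput to node A, then round-trip latency
iperf3 -c 10.0.0.1
ping -c 10 10.0.0.1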

@AmreeshTyagi

@yasker Thanks for the confirmation.
"Now that Longhorn is becoming more feature-complete and more usable, we're starting to shift our focus to performance." That's good news.

Moumouls commented Jul 17, 2019

Hi everyone,
I've just run a benchmark with bonnie++ comparing Longhorn, OpenEBS Jiva, and no volume.
Here are the stats; I hope they help people looking for performance results.

Update: these results are biased; do not take them into account.

Without volume
Data: 4GB
Write: 191 473 KB/s
ReadWriteDelete: 166 443 KB/s
Read: 2 976 754 KB/s

Longhorn (3 replicas, block device)
Data: 4GB
Write: 5 728 KB/s
ReadWriteDelete: 5 745 KB/s
Read: 2 820 495 KB/s

OpenEBS Jiva (3 replicas)
Data: 4GB
Write: 4 929 KB/s
ReadWriteDelete: 3 688 KB/s
Read: 3 024 698 KB/s

Context: 3 nodes (2 vCPU, 80 GB SSD and 7 GB RAM each), same datacenter, OVH

@yasker the write performance is low; does that seem correct to you? (I guess bonnie++ is IOPS-intensive.)

yasker (Member) commented Jul 17, 2019

@Moumouls This result is a bit odd. On the one side, the read performance is way too high: 3 GB/s is unlikely to be real, so I suspect a cache is involved. On the other side, the write performance is way too low: I suspect the CPU is the bottleneck with small blocks. If it's IOPS-intensive, it would help to know the block size and the IOPS achieved. It would also be interesting to see Ceph's result for comparison.

Also, we haven't spent much effort on tuning Longhorn's performance yet, so you can expect much more in future releases. When we test, we normally use fio for block device benchmarks, with a few different block sizes, making sure the data is written in direct I/O mode (which bypasses the page cache, avoiding the inflated read numbers above). You may want to give fio a try.
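A minimal fio invocation along those lines; a sketch, assuming the Longhorn volume is mounted at /data (job name, file size, and runtime are illustrative):

# Random 4k writes in direct I/O mode, bypassing the page cache
fio --name=randwrite-test --filename=/data/fio.tmp --rw=randwrite \
    --bs=4k --size=1g --direct=1 --ioengine=libaio --iodepth=16 \
    --numjobs=1 --runtime=60 --time_based --group_reporting

Repeating with --rw=randread or a larger block size (e.g. --bs=1m) gives the other profiles mentioned above.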

@Moumouls

@yasker thanks for the tips. I've updated my comment.
I just ran a test of a full init of an app (Axelor) with volumes mapped to the data directory (Tomcat files) and the database directory (Postgres).

Axelor init time with HostPath: 7 min
Axelor init time with Longhorn: 10 min
Context: 3 nodes (2 vCPU, 80 GB SSD and 7 GB RAM each), same datacenter, OVH

In a real use case, performance now seems OK!
I'm going to run complementary tests with fio.

stale bot commented Nov 22, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
