Vitess Continuous Performance Testing #141

Closed · dkhenry opened this issue Jun 16, 2020 · 6 comments

@dkhenry commented Jun 16, 2020

Please fill out the details below to file a request for access to the CNCF Community Infrastructure Lab. Please note that access is targeted to people working on specific open source projects; this is not designed just to get your feet wet. The most important answer is the URL of the project you'll be working with. If you're looking to learn Kubernetes and related technologies, please try out Katacoda.

First and Last Name

Daniel Kozlowski
Akilan Selvacoumar

Email

koz@planetscale.com
as251@hw.ac.uk

Company/Organization

PlanetScale
Heriot-Watt University

Job Title

Minister of Engineering
Open Source

Project Title (i.e., a summary of what you want to do, not the name of the open source project you're working with)

Nightly Performance Testing of Vitess

Briefly describe the project (i.e., what, in detail, are you planning to do with these servers?)

We would like to set up a nightly CI run of Vitess to report on performance over time. The idea is that every night we pull the main branch, run a standard test against it, and report the results.
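
As a rough illustration, the driver for such a nightly run could look like the sketch below. The repository URL is from this issue; the checkout path, results file, and benchmark step (here just `make build` as a stand-in) are hypothetical placeholders for whatever harness ends up being used:

```python
#!/usr/bin/env python3
"""Nightly Vitess benchmark driver -- illustrative sketch only.

The repository URL comes from this issue; the checkout location,
results log, and benchmark command are hypothetical placeholders.
"""
import datetime
import json
import shutil
import subprocess

REPO = "https://github.com/vitessio/vitess"
WORKDIR = "/opt/nightly/vitess"          # hypothetical checkout location
RESULTS = "/opt/nightly/results.jsonl"   # hypothetical results log

def sh(cmd, cwd=None):
    """Run a command, raising if it fails so a broken night is visible."""
    subprocess.run(cmd, cwd=cwd, check=True)

def main():
    # Fresh shallow clone of the default branch so each night tests
    # the latest revision.
    shutil.rmtree(WORKDIR, ignore_errors=True)
    sh(["git", "clone", "--depth", "1", REPO, WORKDIR])

    started = datetime.datetime.utcnow()
    # Placeholder for the "standard test"; substitute the real harness here.
    sh(["make", "build"], cwd=WORKDIR)
    elapsed = (datetime.datetime.utcnow() - started).total_seconds()

    # Append one JSON record per night so results can be charted over time.
    with open(RESULTS, "a") as f:
        f.write(json.dumps({"date": started.isoformat(),
                            "build_seconds": elapsed}) + "\n")

if __name__ == "__main__":
    main()
```

Scheduled from cron (e.g. `0 2 * * * /opt/nightly/run.py`, again a hypothetical path), this would produce one result record per night.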

Is the code that you’re going to run 100% open source? If so, what is the URL or URLs where it is located? What is your association with that project?

Yes: https://github.com/vitessio/vitess

What kind of machines and how many do you expect to use (see: https://www.packet.com/bare-metal/)?

A single instance should be fine. We have run some initial tests with m2.xlarge; the forthcoming c3.xlarge instances would also be good. Anything with NVMe would work.

What OS and networking are you planning to use (see: https://support.packet.com/kb/articles/supported-operating-systems)?

CentOS 8

Any other relevant details we should know about?

Initially we had grand plans of doing large-scale tests of Vitess on the lab, but provisioning the machines took a while. Once we had done the orchestration, we realized it would be good to automate the whole run. So this request is twofold. First, we are ready to run the large-scale tests across a large number of nodes (16 to 32 m2.xlarge along with 3 n2.xlarge), but we don't want to do that without giving a heads-up. Second, we would like to run the tests nightly.
Initially asked in issue #107

@dankohn (Contributor) commented Jun 16, 2020

Agreed on the single server. Please update the request when you want to move on to the scalability tests, to ensure that Packet has the capacity.

@dkhenry (Author) commented Jun 16, 2020

@dankohn I currently have access to the CNCF project, but we would like to get @Akilan1999 access to actually implement the system.

@dankohn (Contributor) commented Jun 16, 2020

@taylorwaggoner can provide.

@taylorwaggoner (Contributor) commented Jun 16, 2020

@dkhenry Are you wanting to add Akilan to the existing project, Native Large Scale Vitess Testing, or would you like me to create a new project for nightly testing and add you both? Thanks!

@dkhenry (Author) commented Jun 17, 2020

@taylorwaggoner (Contributor) commented Jun 17, 2020

I've added Akilan to the existing project. If anything changes and you'd like to have a separate project, please let me know. Thanks!
