MM traffic test: 5% Portland K8s #516

Closed
jwhitlock opened this Issue Sep 21, 2017 · 1 comment


jwhitlock commented Sep 21, 2017

Using the Maintenance Mode plan (issue #409), put SCL3 and AWS production deployments into Maintenance mode, and send 5% or more of MDN traffic to the AWS servers for up to an hour. Monitor with New Relic, Sentry, and manual testing. Record the results and open new issues as needed.

@jwhitlock jwhitlock created this issue from a note in MDN AWS Migration (In Progress (Max 6)) Sep 21, 2017

@jwhitlock jwhitlock added AWS MDN labels Sep 21, 2017

jwhitlock (Contributor) commented Sep 21, 2017
This was successful.

As described in #487 (comment), there are several traffic policies set up. We started in policy v1, with 100% going to SCL3, and spent about 30 minutes trying to determine if the DNS change had been applied. We couldn't see it using dig or other tools.

We then switched to v2, with 5% of the traffic going to AWS. We could see MDN requests in Papertrail and AWS tools. There appeared to be no impact in New Relic or Google Analytics. There were some errors recorded in Sentry for celerybeat, but none related to requests. The resource impact on the AWS caching servers was minimal, about 0.005 CPU.

We switched back to regular mode in production.

For tomorrow's test, we'll try policies that send more traffic to Portland K8s - 5%, then 15%, then 50%, then 100%, as time and resources allow.
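For context, the kind of weighted DNS split described above can be sketched with a small simulation. The endpoint names and weights here are illustrative stand-ins, not the actual Route 53 policy records:

```python
import random

# Illustrative weighted-routing policies; the real Route 53 records differ.
POLICIES = {
    "v1": {"SCL3": 100, "AWS": 0},  # 100% of traffic to SCL3
    "v2": {"SCL3": 95, "AWS": 5},   # 5% of traffic to AWS
}

def pick_endpoint(policy, rng):
    """Choose an endpoint with probability proportional to its weight,
    mimicking how a weighted DNS policy splits resolutions."""
    endpoints = list(policy)
    weights = [policy[e] for e in endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

def simulate(policy, n=100_000, seed=0):
    """Count how n simulated lookups distribute across endpoints."""
    rng = random.Random(seed)
    counts = {e: 0 for e in policy}
    for _ in range(n):
        counts[pick_endpoint(policy, rng)] += 1
    return counts

print(simulate(POLICIES["v2"]))  # roughly a 95/5 SCL3-to-AWS split
```

Because DNS resolvers cache responses, the observed split in practice lags the policy change, which is consistent with the delay seen when verifying the v1 change with dig.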


@jwhitlock jwhitlock closed this Sep 21, 2017

@jwhitlock jwhitlock changed the title from MM traffic test: 5% AWS to MM traffic test: 5% Portland K8s Sep 21, 2017

@jwhitlock jwhitlock moved this from In Progress (Max 6) to Review in MDN AWS Migration Sep 21, 2017

@metadave metadave moved this from Review to Complete in MDN AWS Migration Sep 26, 2017
