
Git operations are slow on instances with a large number of machines #7773

Closed
1 task done
dylanlerch opened this issue Sep 14, 2022 · 2 comments
Assignees
Labels
kind/bug This issue represents a verified problem we are committed to solving p2

Comments

@dylanlerch
Team

  • I've assigned a team label to this issue

Severity

Not blocking, but significant performance degradation for some customers

Version

Impacts creating releases on 2022.2, impacts all Git operations on 2022.3

Latest Version

I could reproduce the problem in the latest build

What happened?

On Octopus 2022.2 versions, creating releases takes a long time on instances with a lot of machines. On Octopus 2022.3, all Git operations are impacted.

Reproduction

  • On an Octopus 2022.3 version
  • Create ~3000 machines in an instance
  • Fetch a deployment process, deployment settings, or project variable set from a Git project
  • The request will take a very long time to complete

Error and Stacktrace

No response

More Information

When writing to and reading from OCL, we convert between slugs (or names on older versions of Octopus) and ids. This allows us to have human-readable identifiers in the OCL, rather than something like Channels-102, which wouldn't be particularly helpful to anyone looking through OCL files.

At the moment, the way that we load this data from the database is extremely inefficient. We're often loading a lot more data than we actually need to perform this mapping. For instances with a large number of machines, loading this data can take a long time.
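In rough terms, the mapping described above amounts to building a dictionary from ids to slugs (and back) before serialising or deserialising OCL. A minimal Python sketch follows; all names and fields (`machines`, `build_slug_maps`, `Id`, `Slug`) are hypothetical, not Octopus's actual code or schema. The point it illustrates is that only the id and slug of each machine are needed for the mapping, so loading full machine records for ~3000 machines is wasted work:

```python
# Hypothetical sketch -- names and record shapes are illustrative,
# not Octopus's actual code or database schema.

# A full machine record carries far more data than the mapping needs:
machines = [
    {"Id": "Machines-1", "Slug": "web-server-01", "Roles": ["web"], "Thumbprint": "AB12"},
    {"Id": "Machines-2", "Slug": "db-server-01", "Roles": ["db"], "Thumbprint": "CD34"},
]

def build_slug_maps(rows):
    """Build both directions of the id <-> slug mapping used when reading
    and writing OCL, touching only the two fields actually required."""
    id_to_slug = {row["Id"]: row["Slug"] for row in rows}
    slug_to_id = {slug: id_ for id_, slug in id_to_slug.items()}
    return id_to_slug, slug_to_id

id_to_slug, slug_to_id = build_slug_maps(machines)
print(id_to_slug["Machines-1"])   # -> web-server-01
print(slug_to_id["db-server-01"])  # -> Machines-2
```

The efficient fix implied by the issue is to project only those two columns from the database (rather than hydrating every full machine document) when building this lookup.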

Workaround

No response

@dylanlerch dylanlerch added kind/bug This issue represents a verified problem we are committed to solving p2 state/triage labels Sep 14, 2022
@dylanlerch
Author

The fix is only in Cloud instances for now; we'll roll this performance fix back to 2022.3 in the coming days.

@Octobob
Member

Octobob commented Nov 8, 2022

🎉 The fix for this issue has been released in:

Release stream   Release
2022.3           2022.3.10594
2022.4           2022.4.3438
2023.1+          all releases
