
Cache Layer: recreate analysis with benchmark tool #39

Open
anibal-aguila opened this issue Sep 28, 2022 · 5 comments

Comments

@anibal-aguila

anibal-aguila commented Sep 28, 2022

Hello,
The wiki suggests using the benchmark tool https://github.com/rakyll/hey,
but the output appears to come from perf, in this case from running:

go run mage.go compare AAA BBB

Please, @chlins, could you share the full guide and files so we can reproduce your Response Time, TPS, and Success Rate results?

Thanks in advance,

@anibal-aguila anibal-aguila changed the title Cache Layer: recreate analysis with with benchmark tool Cache Layer: recreate analysis with benchmark tool Sep 28, 2022
@chlins
Member

chlins commented Sep 28, 2022

@anibal-aguila Hi, the comparison diagrams in the wiki were rendered manually, not generated by the scripts in this repo. So if you want to use the compare command, you need to use this repo's scripts as the benchmark tool; hey is only suitable for testing a single, specific API.
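For a single-API benchmark like the one in the wiki, a hey invocation against the manifest endpoint could look roughly like this (the instance URL, repository, tag, and token here are placeholders, not values from the wiki):

```shell
# Benchmark only the image-manifest endpoint with hey:
#   -n = total number of requests, -c = concurrency level.
# HARBOR-INSTANCE, library/nginx, latest, and $TOKEN are placeholders.
hey -n 2000 -c 100 \
  -H "Authorization: Bearer $TOKEN" \
  "https://HARBOR-INSTANCE/v2/library/nginx/manifests/latest"
```

hey then prints a summary with requests per second, a latency histogram, and a status-code distribution, which covers the response time, TPS, and success rate for that one API.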

@anibal-aguila
Author

Hi @chlins, yes, I'm running it with perf and don't see a major difference in the performance analysis.

It would be great if the results of the Harbor cache layer comparative analysis were standardized with perf, so that Response Time, TPS, and Success Rate could be retrieved consistently.

I'm sharing the current comparison between Harbor with the cache enabled and disabled.

[screenshots: benchmark results with cache enabled vs. disabled]

  export HARBOR_URL=https://HARBOR-INSTANCE
  export HARBOR_VUS=100
  export HARBOR_ITERATIONS=600
  export HARBOR_SIZE=ci
  export HARBOR_REPORT=true
  go run mage.go all
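As a rough sketch of standardizing those three metrics, here is one way to compute them from per-request samples. The CSV layout (latency_ms,status) and the fixed 2-second duration are assumptions for illustration, not a format this repo actually produces:

```shell
# Write a few sample request results (latency in ms, HTTP status).
# The data and file path are made up for this example.
cat > /tmp/results.csv <<'EOF'
12.0,200
18.0,200
30.0,500
20.0,200
EOF

# Average response time, throughput (TPS), and success rate,
# treating any 2xx/3xx status as a success.
awk -F, -v dur=2.0 '
  { n++; sum += $1; if ($2 >= 200 && $2 < 400) ok++ }
  END { printf "avg_response_ms=%.1f tps=%.1f success_rate=%.2f\n", sum/n, n/dur, ok/n }
' /tmp/results.csv
```

With the sample data above this prints `avg_response_ms=20.0 tps=2.0 success_rate=0.75`; the same reduction could be applied to output from hey or from this repo's scripts once they share a common sample format.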

Thanks in advance,

@chlins
Member

chlins commented Sep 28, 2022

I think the testing scenarios are different. On the wiki page we only benchmarked the manifest API with hey, so the TPS and response-time comparison applies only to that API. But as you showed above, this repo's scripts test many Harbor APIs, and some of them see no improvement from the cache layer. The cache layer only benefits the scenario of highly concurrent pulling of image manifests, because our design centers on that case. (In fact, we do not have a test case for the manifest API in this repo.)

@anibal-aguila
Author

anibal-aguila commented Sep 28, 2022

I see. We are actually using the official Docker installation method on a VM with:

  Total online memory:  16G
  CPU(s):               4
  Architecture:         x86_64
  Model name:           Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz

A few questions about that:

  • Is there an API call that can confirm whether the cache layer is up and running? We just enabled it in harbor.yml and haven't found any feedback from the Harbor instance since.
  • Why doesn't this enhancement work across Harbor APIs? As the wiki page shows, the improvement could be substantially positive for real-world production scenarios.
  • Could you share how the manually rendered report was generated? And is there some way to standardize it?
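On the first question, one indirect sanity check (a heuristic, not an official status API) is to request the same manifest twice and compare timings; with the cache layer enabled, the second request should usually be noticeably faster. The instance URL, repository path, and token below are placeholders:

```shell
# Pull the same manifest twice and print total time and status for each.
# HARBOR-INSTANCE, library/nginx, latest, and $TOKEN are placeholders.
for i in 1 2; do
  curl -s -o /dev/null \
    -w "request $i: %{time_total}s (HTTP %{http_code})\n" \
    -H "Authorization: Bearer $TOKEN" \
    "https://HARBOR-INSTANCE/v2/library/nginx/manifests/latest"
done
```

This only shows a symptom of caching, not its configuration state, so it complements rather than replaces a proper status check.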

Thanks in advance,

@chlins
Member

chlins commented Sep 29, 2022

  • Besides API response time, you can also monitor resource usage and the number of database connections.
  • Yes, it can work across APIs, but it shouldn't be judged from a single data point; it should be looked at in conjunction with other metrics such as database connections.
  • We collected the results and drew the diagrams with https://chartcube.alipay.com/
