
contact_detail apdex: Define benchmarking scenario #72

Closed
Tracked by #69
michaelkohn opened this issue Mar 29, 2024 · 7 comments
@michaelkohn
Member

  1. Which config(s) will be used
  2. Which device(s)
  3. How many docs / data distribution for the user
  4. Detailed clickpath for the scenario
@michaelkohn michaelkohn changed the title Define apdex benchmarking scenario for "contact_detail" Apdex for "contact_detail": Define benchmarking scenario Mar 29, 2024
@michaelkohn michaelkohn changed the title Apdex for "contact_detail": Define benchmarking scenario contact_detail apdex: Define benchmarking scenario Mar 29, 2024
@latin-panda
Collaborator

latin-panda commented Apr 2, 2024

Here is the initial data analysis I did for this work.

Configs

We will run performance testing on configs we are familiar with, covering 3 major regions:

  • Kenya (East Africa)
  • Nepal (Asia)
  • Togo (West Africa)

Devices

The phone used by the Kenya deployment:

  • Brand: Neon Ray Ultra by Safaricom
  • RAM: 2 GB, with ~1.2 GB in use at all times
  • Processor: octa-core
  • Android: 13
  • Storage: 22 GB, with 8 GB in use
  • Battery: 3750mAh
  • CHT-Android: 1.2

User type

  • offline CHW user

Data

Per CHW (see the data-seeding sketch after this list):

  • 1 CHW Area
  • 100 households
  • 6 members in each household
  • 5 reports per member
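A minimal sketch of seeding that per-CHW distribution. The doc shapes, field names, and `contact_type` values here are illustrative assumptions, not the actual CHT schema or test helpers:

```ts
// Hypothetical seeding sketch for the per-CHW data distribution:
// 1 CHW area -> 100 households -> 6 members each -> 5 reports per member.
// Doc shapes and contact_type values are illustrative, not the real CHT schema.
type Doc = { _id: string; type: string; [key: string]: unknown };

const docs: Doc[] = [];
const chwArea: Doc = { _id: 'chw-area-1', type: 'contact', contact_type: 'chw_area' };
docs.push(chwArea);

for (let h = 0; h < 100; h++) {
  const household: Doc = { _id: `household-${h}`, type: 'contact', contact_type: 'household', parent: chwArea._id };
  docs.push(household);

  for (let m = 0; m < 6; m++) {
    const member: Doc = { _id: `person-${h}-${m}`, type: 'contact', contact_type: 'person', parent: household._id };
    docs.push(member);

    for (let r = 0; r < 5; r++) {
      docs.push({ _id: `report-${h}-${m}-${r}`, type: 'data_record', contact: member._id, form: 'example_form' });
    }
  }
}

console.log(docs.length); // 1 + 100 + 100*6 + 100*6*5 = 3701 docs per CHW
```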

Iterations

We will run the automated tests with at least 1 user and simulate 5 days, running the suite 10 times each day (sketched below).
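As a rough sketch of that iteration plan, the outer loop could look like this; `runClickpathSuite` and `advanceSimulatedDay` are hypothetical placeholders for the real automation entry points:

```ts
// Hypothetical outer loop for the iteration plan: 5 simulated days,
// 10 suite runs per day, for each test user.
// Both helpers are assumed placeholders for the real automation entry points.
declare function runClickpathSuite(user: string): Promise<void>;
declare function advanceSimulatedDay(): Promise<void>;

const SIMULATED_DAYS = 5;
const RUNS_PER_DAY = 10;

async function runBenchmark(users: string[]): Promise<void> {
  for (let day = 1; day <= SIMULATED_DAYS; day++) {
    for (const user of users) {
      for (let run = 1; run <= RUNS_PER_DAY; run++) {
        await runClickpathSuite(user); // executes the clickpath described below
      }
    }
    await advanceSimulatedDay(); // move the simulated clock forward one day
  }
}
```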

Clickpath

As an offline user, access the application and do the following navigation for each contact type:

  • CHW Area:
    • Tap on the Contacts tab, and wait for the content to load
    • Tap on the CHW Area that has at least 100 households, and wait for all content to load
  • Household:
    • Tap on the Contacts tab
    • Tap on a household with at least 6 members, and wait for all content to load
  • Patient:
    • Tap on the Contacts tab
    • Tap on a household with at least 6 members, and wait for all content to load
    • Tap on a female patient with at least 5 reports and 3 tasks, and wait for all content to load

At the end of the tests, advance 1 day and sync the app to generate the telemetry records (a test sketch follows below).
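A sketch of that navigation as a WebdriverIO-style spec, purely for illustration. The selectors and the `advanceSimulatedDay`/`syncApp` helpers are assumptions, not the actual cht-core e2e utilities:

```ts
// Illustrative WebdriverIO-style spec for the clickpath above.
// Selectors and the two helpers are assumptions, not real cht-core test utilities.
declare function advanceSimulatedDay(): Promise<void>;
declare function syncApp(): Promise<void>;

describe('contact_detail apdex clickpath (offline CHW)', () => {
  it('loads the CHW Area detail', async () => {
    await $('#contacts-tab').click();              // Contacts tab (hypothetical selector)
    await $('#contact-list').waitForDisplayed();   // wait for the list to load
    await $('.contact-row.chw-area').click();      // CHW Area with >= 100 households
    await $('#contact-detail').waitForDisplayed(); // wait for all content to load
  });

  it('loads a household detail', async () => {
    await $('#contacts-tab').click();
    await $('.contact-row.household').click();     // household with >= 6 members
    await $('#contact-detail').waitForDisplayed();
  });

  it('loads a patient detail', async () => {
    await $('#contacts-tab').click();
    await $('.contact-row.household').click();
    await $('.contact-row.person').click();        // female patient, >= 5 reports, 3 tasks
    await $('#contact-detail').waitForDisplayed();
  });

  after(async () => {
    await advanceSimulatedDay(); // advance 1 day so telemetry records are generated
    await syncApp();             // sync to push the telemetry docs
  });
});
```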

@latin-panda
Collaborator

@michaelkohn I still have to do item 4, the detailed clickpath for the scenario, but you can have a look and let me know your thoughts.

The Nepal data you see there is what I could see from Raphael's test in our test instance for that config. I haven't properly analyzed the full data (that work belongs to the other ticket) because it wasn't syncing to PostgreSQL, so I added the question marks.

@michaelkohn
Member Author

Thanks @latin-panda, great start. I don't know what this will all look like when we get to the point of publishing it, but this is really useful in starting to shake out all the details 🙏🏼

Config

  1. It might be the case that the apdex score is already above .94 for some configs and not for others, that's totally fine. We don't need our baseline to represent every config scenario. Obviously it's most impactful if it helps more people, but it's also impactful if it just helps 1 big project.
  2. When identifying the baseline config, I'd imagine we'll want to be able to have a snapshot of it saved with our tests so that it can be recreated even if we end up changing the config in the future
  3. I don't think we need to document every nuance of the baseline config, but I do think it would be useful to briefly describe it in words to include some high level details like "contact summary has 5 fields on it, there are 2 condition cards", etc...

Devices

  1. I'd imagine we'll want to provide the specs / versions of the testing phone

User type

  1. During our call today, I mentioned that the "Supervisor" user should also be offline. I don't think the current test suite includes supervisors anyway, and I'm OK with not including them for now... you mentioned it as a stretch goal, which is fine; I'm also OK with removing it for now.

Data

  1. We'll want to know how many reports there are per contact

Clickpath scenario

  1. I imagine these will just be very simple like "Tap on a household from Contact Page List View" or "Tap on person from Contact Page Contact Detail View (Household)"
  2. And like you have set out in the current performance duration section, I think we should evaluate each level separately. For example (using a typical CHW Area/Household/Person hierarchy), baseline for... CHW Area, Contact of CHW Area (this is probably just the user's contact/self), Household, Contact of the Household.

Data

  1. I think we should be focusing on the apdex score for the given scenario (CHW Area, Contact of CHW Area, Household, Contact of Household). It's useful to see the counts because it gives a sense of scale to the tests but...
  2. Apdex treats tolerable and frustrated differently in the calculation, so we need to differentiate tolerable from frustrated. For example, your apdex score will be different if you have 5 tolerable and 0 frustrated vs. 0 tolerable and 5 frustrated (see the calculation sketch below).
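For reference, the standard Apdex formula is (satisfied + tolerating / 2) / total samples, which is why the two counts have to be tracked separately. A small sketch with the example numbers above (the thresholds that classify each sample come from telemetry and are not shown here):

```ts
// Standard Apdex: (satisfied + tolerating / 2) / total samples.
// Frustrated samples count only toward the denominator, so the split matters.
function apdex(satisfied: number, tolerating: number, frustrated: number): number {
  const total = satisfied + tolerating + frustrated;
  return total === 0 ? 1 : (satisfied + tolerating / 2) / total;
}

console.log(apdex(0, 5, 0)); // 5 tolerable, 0 frustrated -> 0.5
console.log(apdex(0, 0, 5)); // 0 tolerable, 5 frustrated -> 0
```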

@latin-panda
Collaborator

I'd imagine we'll want to be able to have a snapshot of it saved with our tests so that it can be recreated even if we end up changing the config in the future

@michaelkohn Do you mean like the Apdex result so we can compare later?

@latin-panda
Collaborator

@michaelkohn I have updated the info based on your comments. Do you think something is missing? If we are good, I will close the ticket as completed.

Togo info will have to wait until next week, when the tests are ready.

@michaelkohn
Member Author

I'd imagine we'll want to be able to have a snapshot of it saved with our tests so that it can be recreated even if we end up changing the config in the future

@michaelkohn Do you mean like the Apdex result so we can compare later?

Nope, I meant keeping a snapshot of the actual config from when the tests were executed... I wouldn't expect it to change in the short term, but I can imagine that in the future we may need to change the config to either accommodate some cht-core change or an important change to a project's config.

Do you think something is missing?

Performance baseline

I actually think we should remove this section since it is covered in #69.

Clickpath

Thinking about this some more, do you think it's useful to add details like "load a contacts list that has 100 hh's in it, select the 5th household"...?

@latin-panda
Collaborator

Nope, i meant keeping a snapshot of the actual config from when the tests were executed.

Yes, we have a snapshot of every config in the Fast UI folder.

I've applied the last feedback and closed this as complete.
