
Maya: Testing - OP-5060 #5644

Conversation

@tokejepsen (Member) commented Sep 23, 2023

DO NOT MERGE

This is now acting as a checklist only.

Changelog Description

This turned into a monstrosity of a PR, so sorry! We can split out features if needed, but I think they are all needed to improve the testing workflow.
A lot of the focus has been on workflow improvement and transparency of data. For testing, I think we should keep all data in the repository rather than on cloud storage, because that helps external collaborators. Data file sizes will eventually become a concern, but I think we should start with the smallest tests first and build up to larger data sizes. Once we hit file size issues we can evaluate our options; for now, more and faster testing should be the goal.

I have not tested all applications with these new changes as I wanted to present the workflow first, then I can adjust other hosts to fit the new workflow.

Features:

  • added Maya 2024.
  • added MayaPy. 2022 is not supported because I could not get the environment to work correctly. In 2023-2024 we can use the -I flag, which isolates the environment from the user's, but this flag does not work in 2022.
  • openpype_mongo command flag for explicit MongoDB address.
  • keep_app_open command flag. This stops the host from closing, allowing interactive inspection. I find this useful for working with the testing scene data in the context of the host, just as if you had launched the host normally.
  • app_group command flag. This is for changing which flavour of the host to launch. In the case of Maya, you can launch Maya and MayaPy, but it can be used for the Nuke family as well.
  • app_variant accepts an all value. This fetches all available variants of the host to test against. Each app variant gets its own testing session with output files and database. App variant testing is parametrized, which makes the output logging clearer; see the sketch after this list.
  • dump_databases command flag. This will dump all databases for the app variant. This helps with updating the expected database. This flag will skip all tests.
  • find_all_available_variants_for_group method for finding all available host variants.
  • ingested expected output files.
  • ingested expected database.
  • ingested Maya startup script.
  • ingested input workfile.
  • ingested input database.
  • human-readable input database (JSON format).
  • cleaned-up input database (no logs).
  • pyblish and Python errors within Maya are flagged as a failed test.
  • database testing of expected data references the ingested database instead of hardcoded data. This makes updating the expected database easier.
  • Deadline testing will fail if there are any errors on any jobs.
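
To make the parametrization point above concrete, here is a minimal sketch of how per-variant parametrization could look with pytest. The import location and exact signature of find_all_available_variants_for_group are assumptions, and the test body is illustrative only, not the actual test code.

```python
# Sketch only: give each available app variant its own pytest parameter so
# results and logs are reported per variant. The import path and signature
# of find_all_available_variants_for_group are assumptions.
from tests.lib.testing_classes import find_all_available_variants_for_group  # assumed location


def pytest_generate_tests(metafunc):
    # Each available Maya variant (e.g. "2023", "2024") becomes its own
    # test session, so failures are reported per variant.
    if "app_variant" in metafunc.fixturenames:
        variants = find_all_available_variants_for_group("maya")
        metafunc.parametrize("app_variant", variants)


def test_publish(app_variant):
    # The real test would launch this variant, publish the ingested workfile
    # and compare the output files and database documents.
    assert app_variant
```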

Changes:

  • test_data_folder command flag changed to data_folder.
  • fixed the project not being set when running Maya in headless mode.
  • there is a known issue where Maya loses its scene name, so just using "save" won't work; forcing a rename to the current scene name covers all cases (see the sketch after this list).
  • fixed various plugin imports that were flagged as erroring, mainly within the Deadline module where plugins for all hosts share a location.
  • when extracting the review, the default resolution should be 0 so the review resolution is inherited from the input.
  • setup_only will skip all tests.
  • Maya testing code is consolidated to a single file.
  • when testing, Deadline jobs do not wait for asset dependencies, to speed up testing.
  • use MayaPy as the default.
  • removed Maya 2020 and older from default settings and thus from testing. Maya 2020 does not launch correctly anymore.
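
For reference, the rename-before-save workaround from the list above boils down to something like the following sketch using maya.cmds (the actual change lives in the Maya host code):

```python
# Sketch of the workaround for Maya losing its scene name in headless runs:
# force a rename to the current scene path before saving, so a plain save
# works whether or not Maya still knows the file name.
from maya import cmds

scene_name = cmds.file(query=True, sceneName=True)
cmds.file(rename=scene_name)
cmds.file(save=True)
```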

The idea with having MayaPy as a testing host is that we can now run the tests in a Docker container for CI. I think we should run the tests on PRs against the latest version of MayaPy (2024); later in the git workflow we could run more elaborate tests against multiple versions of Maya.
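
As a rough illustration of the isolated-environment launch that makes this feasible, running a test script through MayaPy with the -I flag could look like this; the executable path and script name are placeholders, and the real launch goes through the application launcher:

```python
# Sketch: run a script through MayaPy in isolated mode. -I is the standard
# Python isolated-mode flag (it ignores user site-packages and PYTHON*
# environment variables); per the description above it works in 2023-2024.
import subprocess

mayapy = r"C:\Program Files\Autodesk\Maya2024\bin\mayapy.exe"  # placeholder path
subprocess.run([mayapy, "-I", "run_maya_tests.py"], check=True)  # placeholder script
```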

Further thoughts:

  • Parallelize app variant tests. With the pytest-xdist plugin, we could run the app variant tests in parallel. We would also need to spin up multiple Deadline workers, but it could reduce the testing time; see the sketch after this list.
  • Separate Deadline tests. Testing through Deadline, like rendering, can take a while to process. There could potentially be a flag for skipping the tests of certain modules.
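
For the parallelization idea above, a minimal sketch with pytest-xdist (assuming it is installed and the variant tests are parametrized as sketched earlier) just passes a worker count to pytest; the worker count and test path are illustrative:

```python
# Sketch: distribute the per-variant tests across worker processes with
# pytest-xdist ("-n 3" starts three workers). Each parallel variant would
# also need its own Deadline worker to service the render jobs.
import pytest

exit_code = pytest.main(["-n", "3", "tests/integration/hosts/maya"])
```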

Testing notes:

NOTE: TESTING WILL ERASE THE DATABASE, SO BACK UP YOUR LOCAL DATABASE OR POINT TO A NEW ONE!

  1. Testing latest available MayaPy:
openpype_console runtests C:\Users\tokejepsen\OpenPype\tests\integration\hosts\maya --openpype_mongo "mongodb://localhost:2707/"
  2. Testing all available Maya versions (2022-2024):
openpype_console runtests C:\Users\tokejepsen\OpenPype\tests\integration\hosts\maya --app_group maya --app_variant all --openpype_mongo "mongodb://localhost:2707/"

@BigRoy (Collaborator) commented Sep 25, 2023

Also I have noticed you are moving imports into the process method in some Deadline plugins. I am not sure why this would be a problem now?

I suspect the plugins for e.g. Houdini failing to load correctly when running tests for Maya would show up as "errored plugins", which might not be what you want when running tests, since you'd expect tests to run without errors :)

@tokejepsen (Member, Author)

Also I have noticed you are moving imports into the process method in some Deadline plugins. I am not sure why this would be a problem now?

I suspect the plugins for e.g. Houdini failing to load correctly when running tests for Maya would show up as "errored plugins", which might not be what you want when running tests, since you'd expect tests to run without errors :)

Yup @BigRoy is right. When testing and validating the publish results we don't want any errors even if they are trivial like skipping plugins due to imports.
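
For context, the pattern under discussion is roughly the following (a generic sketch, not the actual Deadline plugin code): a host-specific import at module level makes the plugin error during discovery in other hosts, so the import is deferred into process().

```python
# Sketch of the deferred-import pattern: importing a host-specific module at
# the top of a shared Deadline plugin makes plugin discovery error in every
# other host, so the import is moved inside process().
import pyblish.api


class CollectExampleSceneName(pyblish.api.InstancePlugin):
    """Illustrative plugin only; names do not match the real plugins."""

    order = pyblish.api.CollectorOrder
    hosts = ["maya"]

    def process(self, instance):
        # Imported here instead of at module level so the plugin can still be
        # discovered without erroring when Maya is not available.
        from maya import cmds

        instance.data["currentFile"] = cmds.file(query=True, sceneName=True)
```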

@tokejepsen (Member, Author)

I believe this should definitely be divided into smaller PRs if anybody is to review it at all. How about splitting this into:

  1. testing backend changes + new CLI arg (also keep the old one since it is used in CI testing)
  2. all hosts > reflecting the smallest possible changes related to the new testing backend code from 1.
  3. New Maya variant addition
  4. Maya integration tests changes
  5. Deadline changes

Ideally each of the PRs should be testable independently from the other PRs, but I do realize this might be a huge limitation, so it is also possible to reference the dependent PR in the description.

The issue here is that nobody will review this beast, so it needs to be divided first.

Cool, I will get on with splitting this PR into smaller ones.

@mkolar (Member) commented Feb 9, 2024

Because we're splitting OpenPype into ayon-core and individual host addons, this PR would have to be re-created to target one of those.

The testing infrastructure specifically will most probably move to a separate repo, and this PR has almost fully been broken up and merged anyway.

@mkolar closed this on Feb 9, 2024
@ynbot added this to the next-patch milestone on Feb 9, 2024
@jakubjezek001 removed this from the next-patch milestone on Feb 15, 2024
Labels
host: Houdini, host: Maya, host: Nuke, host: 3dsmax, module: Deadline, port to AYON, size/XXL, type: enhancement