
Python: Feature python integration tests #1350

Closed
wants to merge 25 commits

Conversation

@awharrison-28 (Contributor) commented Jun 5, 2023

Motivation and Context

This PR is intended to improve the stability of the Python integration tests.

Description

  • Pin a specific version of setuptools in the build-system requirements (see the sketch below).
  • Downgrade torch from 2.0.1 to 2.0.0 (again), as 2.0.1 has an Ubuntu regression around its CUDA dependencies.
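
For illustration, a minimal sketch of what these pins could look like in python/pyproject.toml, assuming a Poetry-managed project; the table contents are illustrative rather than the PR's actual diff (the >=67.8.0 bound is the one discussed in the review below):

```toml
[build-system]
# Pin setuptools explicitly so builds don't silently pick up a broken future release.
requires = ["poetry-core", "setuptools>=67.8.0"]
build-backend = "poetry.core.masonry.api"

[tool.poetry.dependencies]
python = "^3.8"
# Held at 2.0.0: torch 2.0.1 has an Ubuntu regression around its CUDA dependencies.
torch = "2.0.0"
```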

See Actions runs:

Contribution Checklist

awharrison-28 and others added 16 commits May 19, 2023 16:19
### Motivation and Context
This PR slims down the number of integration tests running against AOAI
and OAI models, reducing the time and resources spent running them.
Slimming these tests down also reduces the chance of throttling from
these endpoints, which leads to unstable integration tests.

The scenarios previously covered are already handled using HF models,
which are much less expensive to test against and do not run the risk
of throttling.


### Description
- Added a top-level test `conftest.py` to handle kernel creation and
OpenAI model secret handling. I had originally intended `create_kernel`
to do more than just create a kernel, but additional setup wasn't
needed. I've left the pytest fixture, though, since other fixtures can
call it, and using it in tests can make importing Kernel from
semantic_kernel unnecessary (a rough sketch of this setup follows the
list).
- Added a completions `conftest.py` to set up completions tests. For
example, `setup_hf_text_completion_function` allows testing
text2text_generation and text_generation models with the same test
file.
- Common test code is now in pytest fixtures instead of shared helper methods.
- For a number of the completion tests, I have broken the asserts out
into individual tests instead of running one giant test. This makes it
easier to identify regressions in specific patterns around invoking skills.
- Added retry logic to conversationSummarySkill.
- Renamed tests to be more descriptive.
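
A rough sketch of the kind of top-level `conftest.py` described above, assuming the semantic_kernel Python API of the time; the secret-handling fixture name and environment variable names are illustrative, not the PR's actual code:

```python
# tests/integration/conftest.py (illustrative sketch)
import os

import pytest

import semantic_kernel as sk


@pytest.fixture(scope="function")
def create_kernel():
    """Create a bare Kernel so tests don't need to import it from semantic_kernel."""
    return sk.Kernel()


@pytest.fixture(scope="session")
def get_oai_config():
    """Read OpenAI secrets once per session; skip tests if they aren't configured."""
    api_key = os.environ.get("OPENAI_API_KEY")
    org_id = os.environ.get("OPENAI_ORG_ID")
    if not api_key:
        pytest.skip("OpenAI credentials are not configured")
    return api_key, org_id
```

A test that previously built its own kernel would instead take `create_kernel` as a parameter, and a completions-level `conftest.py` can layer fixtures such as `setup_hf_text_completion_function` on top of it.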
@github-actions bot added the python (Pull requests for the Python Semantic Kernel) label Jun 5, 2023
@awharrison-28 self-assigned this Jun 9, 2023
@awharrison-28 marked this pull request as ready for review June 16, 2023 22:22
@shawncal previously approved these changes Jun 16, 2023
@shawncal changed the title from "Feature python integration tests" to "Python: Feature python integration tests" Jun 29, 2023
@awharrison-28 requested a review from a team as a code owner July 5, 2023 17:42
@CodiumAI-Agent

PR Analysis

  • 🎯 Main theme: Improving stability of Python integration tests
  • 🔍 Description and title: Yes
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Minimal and focused: Yes, the PR is focused on improving the stability of Python integration tests by adjusting the build system requirements and the parallelization of the tests.
  • 🔒 Security concerns: No, the changes in this PR do not introduce any obvious security concerns.

PR Feedback

  • 💡 General PR suggestions: The PR is generally well-structured and focused. However, it would be beneficial to add tests that verify the changes made in this PR. Also, consider adding comments in the code to explain why specific versions of dependencies are required (e.g., setuptools>=67.8.0).

  • 🤖 Code suggestions:

    • relevant file: python/pyproject.toml
      suggestion content: Consider specifying the exact version of 'setuptools' that is known to work with your project, instead of '>=67.8.0'. This can help avoid potential issues with future versions of 'setuptools'. [important]

    • relevant file: .github/workflows/python-integration-tests.yml
      suggestion content: Consider adding a comment explaining why 'max-parallel' was increased from 1 to 3. This will help future contributors understand the reasoning behind this change. [medium]
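
For the second suggestion, the comment might sit directly above the setting in the workflow's strategy block; the surrounding keys and the wording of the comment below are assumptions, not the file's actual contents:

```yaml
strategy:
  # max-parallel raised from 1 to 3: with fewer AOAI/OAI calls per test run,
  # the throttling risk that originally forced serial execution is lower.
  max-parallel: 3
```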

How to use

Tag me in a comment '@CodiumAI-Agent' to ask for a new review after you update the PR.
You can also tag me and ask any question, for example '@CodiumAI-Agent is the PR ready for merge?'

Labels: python (Pull requests for the Python Semantic Kernel)
Projects: Archived in project, Status: Sprint: Done
Linked issues: None yet
Participants: 5