
1) Replace record/playback tests with mocks 2) Add LOTS of nightly functional tests  #5977

@mdrichardson

Description


Test Adjustments

This proposal covers two related issues:

  1. Converting Record and Playback-style tests in the JS SDK to mocks, and
  2. Creating functional tests for many of our mocked tests

Current Situation

Record and Playback

Many of the functional tests in the JavaScript SDK rely on a Record and Playback style of testing instead of the more traditional mocks. This is problematic because:

  1. It adds additional complexity to the testing codebase, essentially creating a new testing framework that devs need to learn.
  2. It adds additional test recording files to PRs.
  3. Reading recorded results from files makes tests less deterministic than plain mocks.

Functional Tests

Additionally, wherever we have mocks or Record and Playback tests (in all SDKs), there is potential for the tests to become stale. I've fixed a couple of small issues where this has occurred with both Cosmos and Connector.

Proposed Solution

Converting the JavaScript tests from Record and Playback to mocks is "easy" enough and doesn't need much design guidance--just confirmation that it should be done.
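For illustration, the conversion mostly amounts to injecting the HTTP dependency and replacing the recorded exchange with an in-file mock. Everything below (getAccessToken, the fetch-like shape, the token URL) is a hypothetical sketch, not SDK code:

```typescript
// Hypothetical example only: getAccessToken and FetchLike are illustrations,
// not botbuilder-js code.
type FetchLike = (url: string) => Promise<{ status: number; json: () => Promise<any> }>;

// Client under test: fetches a token through an injected HTTP dependency.
async function getAccessToken(fetchFn: FetchLike, url: string): Promise<string> {
  const res = await fetchFn(url);
  if (res.status !== 200) {
    throw new Error(`Token request failed with status ${res.status}`);
  }
  const body = await res.json();
  return body.access_token;
}

// Plain mock: deterministic, lives in the test file, no recording files in the PR.
const mockFetch: FetchLike = async () => ({
  status: 200,
  json: async () => ({ access_token: 'fake-token' }),
});

getAccessToken(mockFetch, 'https://login.example.test/token').then((token) => {
  console.log(token); // 'fake-token'
});
```

Because the mock is ordinary code in the same file as the test, there is no recording framework to learn and no replay files to review in PRs.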

Relevant Libraries and Features

However, we have the following libraries/features that have tests with mocks that could be added to nightly functional tests:

  • CosmosDbPartitioned
  • CosmosDb (non-partitioned)
  • AzureBlob
  • LUIS
  • QnA
  • Adapters (Facebook, Slack, etc)
  • ApplicationInsights
  • BotFrameworkHttpAdapter
  • BotFrameworkHttpClient
  • SkillHttpClient
  • Adaptive Dialogs HTTP Actions
  • Streaming

There are two ways that we can approach this:

  1. Use existing mocked tests and programmatically switch between live/mocked/skipped modes based on the presence of environment variables (AppId, AccessKey, etc.), or
  2. Write separate functional tests

[Option 1]: Programmatic

For the items that we want nightly functional tests for, we need to design the tests such that running them live can be easily enabled. This could be addressed with the following design (this discussion will be based around .NET, but will apply in JS/Python to the extent possible):

  1. Each relevant test file will have a WillRunLive parameter, which is lazily set by determining whether or not certain environment variables are present.
    1. The required environment variables will be determined by each test file, but will generally be something like an AppId or access key.
    2. Once WillRunLive is determined, there will be logger output that specifies which way the test is running (Live vs. Mocked vs. Skipped)
  2. The "Assemble" section of each unit test will be wrapped by a call to WillRunLive:
    1. If this results in "Skip", Assert.Ignore()
    2. If this results in "Mocked", mocks are set up
    3. If this results in "Live", the mock setup is skipped
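The mode-selection steps above could be sketched as follows (names such as resolveTestMode and SKIP_MOCKED_TESTS are assumptions for illustration, not an existing API; each test file would supply its own required variables):

```typescript
// Sketch of the proposed WillRunLive-style mode selection.
// resolveTestMode and SKIP_MOCKED_TESTS are hypothetical names.
type TestMode = 'Live' | 'Mocked' | 'Skipped';
type Env = Record<string, string | undefined>;

// Lazily determine the mode from the environment variables this test file requires.
function resolveTestMode(requiredVars: string[], env: Env): TestMode {
  const allPresent = requiredVars.every((name) => Boolean(env[name]));
  if (allPresent) return 'Live';
  // Example policy: fall back to mocks when credentials are absent,
  // and skip entirely when mocked runs are explicitly disabled.
  return env['SKIP_MOCKED_TESTS'] ? 'Skipped' : 'Mocked';
}

// Logger output specifying which way the tests will run:
const mode = resolveTestMode(['MicrosoftAppId', 'MicrosoftAppPassword'], {});
console.log(`CosmosDb tests running in ${mode} mode`); // 'Mocked' with an empty env
```

The "Assemble" wrapper would then switch on the returned mode: skip the test (e.g. Assert.Ignore() in .NET), set up mocks, or proceed straight to the live service.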

Pros:

  • Keeps all tests within the same file
  • No re-writing of tests between mocked and functional

Cons:

  • Developers would need to "learn" how to switch between modes when testing locally

[Option 2]: Separate Files

We could fairly easily break mocked tests out into separate files for nightly functional tests with mostly a copy/paste and then removal of mocks.

Pros:

  • Separation of test purposes
  • Easy to implement

Cons:

  • Multiple places to write tests for the same component

Triggering Functional Tests

Nightly

Yes.

PRs

We could optionally add automation that triggers live functional tests for the relevant feature/library when certain GitHub labels are added. These runs could be triggered either when:

  1. The PR gets submitted/changed (this would run fairly frequently)
  2. The PR gets an approval (this would then run once per PR as a sort of final check that the PR is good)
    1. Note: I'm not sure how difficult this one would be
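As a sketch, a GitHub Actions workflow could combine the nightly schedule with label- and approval-based triggers. The label name, package script, and secret names below are assumptions, not decided values:

```yaml
# Hypothetical workflow; label, script, and secret names are placeholders.
name: functional-tests
on:
  schedule:
    - cron: '0 8 * * *'          # nightly run (UTC)
  pull_request:
    types: [labeled]             # run when a label is added to the PR
  pull_request_review:
    types: [submitted]           # run when a review is submitted

jobs:
  live-tests:
    # Run only for the nightly schedule, the trigger label, or an approval.
    if: >
      github.event_name == 'schedule' ||
      github.event.label.name == 'run-functional-tests' ||
      github.event.review.state == 'approved'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: yarn test:functional        # assumed package script
        env:
          MicrosoftAppId: ${{ secrets.APP_ID }}
          MicrosoftAppPassword: ${{ secrets.APP_PASSWORD }}
```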

Decisions to Make

  • Do we convert all Record and Playback tests to use Mocks?
  • What libraries/features do we add nightly functional testing to, if any?
    • Do we add this programmatically to existing tests, or move functional tests into separate files?
    • How do we want these tests triggered in GitHub, if at all?

Metadata

Labels

  • Area: Engineering (internal issues that are related to improving code quality, refactorings, code cleanup, etc.)
  • ExemptFromDailyDRIReport (use this label to exclude the issue from the DRI report)
  • draft (the issue definition is still being worked on and it is not ready to start development)
