
Dolphin release candidate testing - Defender Product Assessment Sanity Testing #179

Closed
nanda-katikaneni opened this issue Mar 8, 2023 · 3 comments

@nanda-katikaneni (Collaborator) commented on Mar 8, 2023:

💡 Summary

In preparation for the Dolphin (v0.3.0) release of the ScubaGear code, conduct sanity testing of the Defender product assessment. The objectives and scope of the task are provided below.

Objectives:

  1. Confirm there are no regression issues in the Defender product assessments with the Dolphin release.
  2. Perform additional sanity testing to ensure that each policy assessment result is shown in the report and that the assessment works against all available tenants (G5/E5, G3/E3).
  3. Confirm the Defender product assessment works in both interactive and non-interactive (service principal) modes.

Scope:

  1. Detailed functional testing of each policy statement result is out of scope.
  2. Consistent results between interactive and non-interactive modes, and no operational issues in running the test against all tenant types, are within scope.

Motivation and context

This testing helps ensure that the Dolphin release is stable.

Implementation notes

Before the test, ensure that the test user has the minimum user role required on the given tenant to assess Defender (see the README). Then execute the Defender product assessment on all available tenants, first in interactive mode and then in non-interactive mode; a sketch of both invocations follows the list below. After the test, verify the following:

  1. Verify that all tests run without errors and that results reports are generated. Each policy has a result (no empty results).
  2. Ensure that there are no regressions from the Coral release: for the tested tenant, compare the report from the current assessment against the saved Coral release results, and confirm that any differing result is consistent with a code change (provide a detailed explanation for any observed diff in results).
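
For reference, a minimal sketch of the two invocations, assuming the `Invoke-SCuBA` entry point and its documented service principal parameters (check `Get-Help Invoke-SCuBA` in the release candidate for exact names):

```powershell
# Import the module from a local clone of the repository
# (path assumes the current repo layout; adjust if needed).
Import-Module .\PowerShell\ScubaGear

# Interactive mode: prompts the test user to sign in to the target tenant.
Invoke-SCuBA -ProductNames defender

# Non-interactive (service principal) mode: authenticates with a
# certificate-backed app registration instead of a signed-in user.
# The three values below are per-tenant placeholders.
Invoke-SCuBA -ProductNames defender `
    -AppID '<application id>' `
    -CertificateThumbprint '<certificate thumbprint>' `
    -Organization '<tenant>.onmicrosoft.com'
```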

Acceptance criteria

  1. Defender product assessment works in both interactive and non-interactive modes against G5, E5, G3, and E3 tenants.
  2. There are no crashes or empty results.
  3. Results are consistent with the Coral release assessment results; any diff is consistent with code changes (a comparison sketch follows this list).
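
A rough sketch of the Coral-vs-Dolphin comparison. The file paths and the `PolicyId`/`RequirementMet` field names are assumptions about the JSON report schema; adjust them to match the actual results output:

```powershell
# Load the saved Coral results and the current Dolphin results.
$coral   = Get-Content -Raw .\CoralResults\TestResults.json   | ConvertFrom-Json
$dolphin = Get-Content -Raw .\DolphinResults\TestResults.json | ConvertFrom-Json

foreach ($old in $coral) {
    $new = $dolphin | Where-Object { $_.PolicyId -eq $old.PolicyId }
    if ($null -eq $new) {
        Write-Warning "$($old.PolicyId): missing from Dolphin results"
    }
    elseif ($new.RequirementMet -ne $old.RequirementMet) {
        # Each diff reported here needs an explanation tied to a code change.
        Write-Warning "$($old.PolicyId): $($old.RequirementMet) -> $($new.RequirementMet)"
    }
}
```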
@nanda-katikaneni nanda-katikaneni added the Testing This issue or task involves testing the automation tool function label Mar 8, 2023
@schrolla schrolla added this to the Dolphin milestone Mar 13, 2023
@ssatyapal123 ssatyapal123 self-assigned this Mar 16, 2023
@ssatyapal123 (Contributor) commented:

G5 results are as expected based on the policy settings, with one error due to a known issue: MSFT deprecated the two alert policies the Rego is checking for. Since we called out those alert policies in the baseline, this requires both a baseline update and a Rego update. This will be addressed during Emerald. (A sketch for checking the tenant's alert policies follows the screenshot.)
[Screenshot: G5 Defender assessment results]
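
To reproduce the known issue, a hedged sketch of listing the tenant's alert policies via Security & Compliance PowerShell; the two baseline alert names below are placeholders, not the actual deprecated policies:

```powershell
# Requires the ExchangeOnlineManagement module for the
# Security & Compliance session.
Connect-IPPSSession

# Placeholders: substitute the two alert policy names called out
# in the Defender baseline.
$requiredAlerts = @('<baseline alert 1>', '<baseline alert 2>')
$existing = (Get-ProtectionAlert).Name

foreach ($alert in $requiredAlerts) {
    if ($alert -notin $existing) {
        # A deprecated policy shows up here and explains the Rego failure.
        Write-Warning "Alert policy not found in tenant: $alert"
    }
}
```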

@ssatyapal123 (Contributor) commented:

To confirm policy changes are properly working, I ran a quick test of two policies on the Commercial E5 tenant (a configuration-check sketch follows the screenshots below):

  • Changed Policy 2.2 (added the ITIN and SSN data types to the Default policy and followed the implementation steps); however, this policy is still showing a failure. I believe this is not a problem with Dolphin but with the tenant configuration.

[Screenshot: Policy 2.2 result]

  • Changed Policy 2.3 (enabled the common attachments filter in the threat policy for email & collaboration); the tenant is now showing a pass.

Before: [screenshot]

After: [screenshot]
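
To sanity-check the tenant configuration behind both results, a sketch using Security & Compliance and Exchange Online cmdlets; the `Default` policy name is an assumption about this tenant's DLP setup:

```powershell
# Policy 2.2: confirm the Default DLP policy's rules reference the
# SSN and ITIN sensitive information types.
Connect-IPPSSession
Get-DlpComplianceRule -Policy 'Default' |
    Select-Object -ExpandProperty ContentContainsSensitiveInformation

# Policy 2.3: confirm the common attachments filter is on;
# EnableFileFilter backs the "common attachments filter" toggle
# in the Defender portal.
Connect-ExchangeOnline
Get-MalwareFilterPolicy | Select-Object Name, EnableFileFilter
```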

@nanda-katikaneni (Collaborator, author) commented:

After further discussions with Shanti, it appears the Policy 2.2 result for E5 is the same in v0.2.1 as well, so there is no regression with Dolphin. As for why it fails when it is expected to pass, this will be investigated further, and if needed a separate issue will be opened for a future release.
