
Initial (bare-bones) implementation of the infrastructure for end-to-end tests #2691

Merged
merged 8 commits on Oct 26, 2022

Conversation

narrieta (Member)

This is the very first basic implementation of the end-to-end tests, based on the POC replacement for DCR under the 'dcr' subdirectory.

At this point, it simply executes two skeleton tests using the LISA framework. Additional functionality will be implemented in future PRs; the present PR starts defining the patterns for implementing end-to-end tests.

@@ -0,0 +1,21 @@
variables:
Member Author:

Entry point for Azure Pipelines

@@ -0,0 +1,82 @@
name: azure
Member Author:

Currently, the interface with LISA is under the 'lisa' subdirectory (it will be moved one level up in a future PR). This YML file is the entry point to LISA.

@@ -0,0 +1,23 @@
#!/usr/bin/env python
Member Author:

Skeleton for the current scenarios\agent-bvt\01_check_waagent_version.py test in DCR

@@ -0,0 +1,45 @@
import argparse
Member Author:

Skeleton for the current scenarios\agent-bvt\06_customscript_20.py test in DCR

@@ -0,0 +1,41 @@
from assertpy import assert_that
narrieta (Member Author), Oct 25, 2022:

Skeleton to define the two basic patterns for the end-to-end tests. Currently hardcoded to specific tests, but this will be abstracted in future PRs.

codecov bot commented Oct 25, 2022

Codecov Report

Merging #2691 (37f03a1) into develop (527443c) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff            @@
##           develop    #2691   +/-   ##
========================================
  Coverage    72.06%   72.06%           
========================================
  Files          103      103           
  Lines        15702    15702           
  Branches      2237     2237           
========================================
  Hits         11315    11315           
  Misses        3872     3872           
  Partials       515      515           


)
class AgentBvt(TestSuite):
@TestCaseMetadata(description="", priority=0)
def check_agent_version(self, node: Node, log: Logger) -> None:
Member Author:

Pattern 1: Execute a test remotely over SSH
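Pattern 1 boils down to: run a command on the node and assert on its exit code/output. A minimal stand-alone sketch of that flow (here `subprocess` stands in for LISA's `node.execute()`, which would run the command over SSH on the test VM; the echoed version string is made up for illustration):

```python
import subprocess

def check_agent_version(run_command) -> str:
    # Pattern 1: execute a command on the node and validate its result.
    # 'run_command' stands in for LISA's node.execute(); in the real test it
    # would run on the remote test VM over SSH instead of locally.
    result = run_command("echo 'WALinuxAgent-2.8.0.11 running on ubuntu 18.04'")
    assert result.returncode == 0, "command failed on the test VM"
    return result.stdout

# Locally we just shell out; LISA would provide the SSH transport.
run_local = lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True)
output = check_agent_version(run_local)
```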

assert_that(result.exit_code).is_equal_to(0)

@TestCaseMetadata(description="", priority=0)
def custom_script(self, node: Node) -> None:
Member Author:

Pattern 2: Execute a test locally (local to the orchestrator) to trigger extensions on the remote test VM
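In Pattern 2 the test code builds the extension definition on the orchestrator and asks Azure to run it on the test VM. A hedged sketch of the kind of data the PR's CustomScriptExtension helper assembles (the publisher/type/version values are the well-known CustomScript ones; the free function and dict shape here are illustrative, not the PR's exact API):

```python
def get_ext_props(settings: dict) -> dict:
    # Illustrative shape of a CustomScript extension definition; the orchestrator
    # would hand this to the Azure SDK (e.g. azure-mgmt-compute) so the command
    # executes on the remote test VM, not on the orchestrator.
    return {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.1",
        "settings": settings,
    }

ext_props = [
    get_ext_props(settings={"commandToExecute": "echo 'Hello World!'"}),
    get_ext_props(settings={"commandToExecute": "echo 'Hello again'"}),
]
```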

Contributor:

At what point is the remote test VM created?

Member Author:

LISA does that. In the output of the task you will see something similar to:

2022-10-25 15:13:49.326[140692391196416][INFO] lisa.[azure].deploy[generated_0] vm setting: AzureNodeSchema(name='lisa-azure-20221025-151340-053-e0-n0', short_name='lisa--053-e0-n0', vm_size='Standard_DS2_v2', maximize_capability=False, location='westus2', subscription_id='8e037ad4-****-****-****-********c15b', marketplace_raw={'publisher': 'canonical', 'offer': 'ubuntuserver', 'sku': '18.04-lts', 'version': '18.04.202210180'}, shared_gallery_raw=None, vhd='', hyperv_generation=1, purchase_plan=None, is_linux=True)
2022-10-25 15:13:53.044[140692391196416][INFO] lisa.[azure].deploy[generated_0] resource group 'lisa-azure-20221025-151340-053-e0' deployment is in progress...
2022-10-25 15:15:26.749[140692391196416][INFO] lisa.[azure].deploy[generated_0] deployed in 99.972 sec
2022-10-25 15:15:26.754[140692483270400][INFO] lisa.env[generated_0].node[0] initializing node 'lisa-azure-20221025-151340-053-e0-n0' w****t@20.230.174.145:22

@TestCaseMetadata(description="", priority=0)
def custom_script(self, node: Node) -> None:
node_context = get_node_context(node)
subscription_id = node.features._platform.subscription_id
Member Author:

Currently this reaches into the internal "_platform" object to get the subscription ID. This should instead be done by implementing a LISA feature that exposes the subscription ID, resource group name, and VM name.
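Such a feature would essentially expose three strings. A minimal sketch of the data it could return (the class and field names are hypothetical; the real implementation would be a LISA feature, and the sample values below are taken from the deployment log quoted earlier in this conversation):

```python
from dataclasses import dataclass

@dataclass
class VmIdentifiers:
    # Values a LISA feature could expose instead of the test reaching into
    # the internal _platform object; names are hypothetical.
    subscription_id: str
    resource_group_name: str
    vm_name: str

ids = VmIdentifiers(
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
    resource_group_name="lisa-azure-20221025-151340-053-e0",
    vm_name="lisa-azure-20221025-151340-053-e0-n0",
)
```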

Contributor:

Once we are creating multiple test VMs with different images, how will LISA know which nodes this test should be run with? Will that be specified in the runbook?

Member Author:

Yes, we will create a combinator to execute each test on each distro and add it to the runbook.

Contributor:

I believe the node details it's trying to fetch are for the actual test VM, not the orchestrator VM, right?

Member Author:

Yes, the Node represents the VM created by LISA.

@@ -0,0 +1,13 @@
# This is a list of pip packages that will be installed on both the orchestrator and the test VM
Member Author:

Requirements for the orchestrator/test VM. This comes directly from the 'dcr' POC, but should be split in two (one file for the orchestrator, one for the test VM).

@@ -0,0 +1,99 @@
from __future__ import print_function
Member Author:

All the files under "scenario_utils" provide infrastructure to execute the tests. They come directly from the 'dcr' POC, which in turn copied them from the DCR repo.

These modules require significant improvements, especially around error handling and diagnostic logging. These improvements will come in future PRs. Feel free to skip those files in this initial review.

@@ -0,0 +1,239 @@
import time
Member Author:

The files under 'scenario_utils' come straight from the 'dcr' POC, which in turn copied them from the current DCR implementation. They need significant improvements in error handling and diagnostic logging; feel free to skip them for this review.

maddieford (Contributor) left a comment:

Left some comments for my own understanding

name: sshKey
displayName: "Download SSH key"
inputs:
secureFile: 'id_rsa'
Contributor:

Where does this task download the secureFile 'id_rsa' from?

Member Author:

Azure Pipelines includes a Library to which one uploads secure files; this task downloads the file from there.

BTW, a nice feature of these tasks is that their output in the pipeline includes a link to the documentation for the task, e.g.

==============================================================================
Task         : Download secure file
Description  : Download a secure file to the agent machine
Version      : 1.200.0
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/download-secure-file
==============================================================================

@@ -0,0 +1,82 @@
name: azure
extension:
- "../testsuites"
Contributor:

Is this the directory all of the LISA tests need to be in? Or will LISA look for tests in the entire /lisa directory?

Member Author:

The LISA runbook points to the test location:

extension:
  - "../testsuites"

assert_that(result.exit_code).is_equal_to(0)

@TestCaseMetadata(description="", priority=0)
def custom_script(self, node: Node) -> None:
Contributor:

At what point is the remote test VM created?

@TestCaseMetadata(description="", priority=0)
def custom_script(self, node: Node) -> None:
node_context = get_node_context(node)
subscription_id = node.features._platform.subscription_id
Contributor:

Once we are creating multiple test VMs with different images, how will LISA know which nodes this test should be run with? Will that be specified in the runbook?

cse.get_ext_props(settings={'commandToExecute': "echo \'Hello again\'"})
]

cse.run(ext_props=ext_props)
Contributor:

I noticed all of the logging from cse.run is being logged as lisa.stderr. Why is LISA capturing this as stderr instead of stdout?

Member Author:

The output is coming from BaseExtensionTestClass (in the scenario_utils directory). This code comes straight from the current DCR and will need a lot of improvements, for example writing to the log file instead of stdout/stderr.
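The improvement described here, writing to a dedicated log file instead of stdout/stderr, could look roughly like this (the logger name and file name are hypothetical):

```python
import logging
import os
import tempfile

# Write extension-test output to its own log file rather than stdout/stderr,
# so the test harness does not capture it as lisa.stderr.
log_path = os.path.join(tempfile.mkdtemp(), "extension_test.log")
logger = logging.getLogger("extension_test")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("CustomScript extension provisioning succeeded")
handler.close()  # flush the file before anything reads it
```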

nagworld9 (Contributor) left a comment:

Looks like some of the places we pinned/hardcoded may need abstraction. I believe you will change that in later PRs.

@TestCaseMetadata(description="", priority=0)
def custom_script(self, node: Node) -> None:
node_context = get_node_context(node)
subscription_id = node.features._platform.subscription_id
Contributor:

I believe the node details it's trying to fetch are for the actual test VM, not the orchestrator VM, right?


cse = CustomScriptExtension(extension_name="testCSE")

ext_props = [
Contributor:

Why do we rewrite this here? It is already there in the add_cse() function in the CustomScript extension. Can we reuse that to run it?

Member Author:

add_cse() does more than I need for this prototype (slowing the execution down), and, in any case, we will need to do a major rewrite of the BaseExtensionTestClass we brought from DCR. In my opinion the abstraction it presents is not appropriate and, more importantly, it handles errors poorly and does not output enough diagnostic info to debug failures.

cd $BUILD_SOURCESDIRECTORY/lisa

# LISA needs both the public and private keys; generate the former
chmod 700 $SSHKEY_SECUREFILEPATH
Contributor:

Are these some sort of environment variables?

Member Author:

Yes, they are defined by the tasks in the Azure pipeline. Many tasks expose their output/parameters as environment variables (those variables are defined in the documentation for each task).

In this case, SSHKEY_SECUREFILEPATH comes from the "sshKey" task that downloads the private SSH key. The task appends the name of its download-path output to the task name and capitalizes the result, hence the variable name.

narrieta (Member Author), Oct 26, 2022:

BTW, the DCR pipeline requires you to add the sshPublicKey environment variable when you create a new instance of the pipeline. Now we generate the public key from the private key at run time. (As an alternative we could also have uploaded the public key as a secure file and added another task to download it, but that does not seem to maintain the mode of the file, which needs to be 700, so I just did it in code.)
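The run-time generation relies on ssh-keygen's -y option, which derives the public key from a private key. A stand-alone sketch of the mechanism (paths here are temporary; in the pipeline the private key path comes from SSHKEY_SECUREFILEPATH, and this sketch assumes ssh-keygen is installed):

```python
import os
import stat
import subprocess
import tempfile

# Create a throwaway private key to stand in for the downloaded secure file.
key_path = os.path.join(tempfile.mkdtemp(), "id_rsa")
subprocess.run(["ssh-keygen", "-q", "-t", "rsa", "-N", "", "-f", key_path], check=True)

os.chmod(key_path, stat.S_IRWXU)  # mode 700, as the pipeline script sets

# 'ssh-keygen -y' prints the public key corresponding to the private key.
public_key = subprocess.run(
    ["ssh-keygen", "-y", "-f", key_path],
    capture_output=True, text=True, check=True,
).stdout
with open(key_path + ".pub", "w") as f:
    f.write(public_key)
```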

- name: vm_name
value: ""
- name: marketplace_image
value: "Canonical UbuntuServer 18.04-LTS latest"
Contributor:

Looks like this is not in URN format. Is this how we're supposed to define images?
How can we pass multiple images? How does each scenario override this?

Member Author:

We'll define a combinator to pass multiple distros. The LISA documentation has some info on how to do that, but we can always work with them if we need more info or features.
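A grid combinator sweeping the marketplace_image variable might look roughly like the fragment below. This is an assumption sketched from memory of LISA's runbook format, not a tested runbook; the exact schema should be checked against the LISA documentation.

```yaml
# Hypothetical runbook fragment: run the test suite once per image value
combinator:
  type: grid
  items:
    - name: marketplace_image
      value:
        - "Canonical UbuntuServer 18.04-LTS latest"
        - "Canonical 0001-com-ubuntu-server-focal 20_04-lts latest"
```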

pr: none

pool:
vmImage: ubuntu-latest
Contributor:

Is this pool for the orchestrator VMs?

Member Author:

Yes, that is the pool of Azure Pipelines VMs where we run the orchestrator (singular; right now we have only one orchestrator).

nagworld9 (Contributor), Oct 26, 2022:

You mean you plan to keep only one for all scenarios, or just that at this point we have one?

Member Author:

One at this point. The actual number will depend on how we organize the test runs and how much we can take advantage of LISA's capabilities.

@narrieta narrieta merged commit a1f3049 into Azure:develop Oct 26, 2022
@narrieta narrieta deleted the tests-e2e branch October 26, 2022 16:25