test(robot): migrate test_backup_volume_list
Signed-off-by: Yang Chiu <yang.chiu@suse.com>
yangchiu committed May 24, 2024
1 parent 099c719 commit 1072de2
Showing 20 changed files with 524 additions and 162 deletions.
2 changes: 1 addition & 1 deletion docs/content/manual/functional-test-cases/ui.md
@@ -12,7 +12,7 @@ Accessibility of Longhorn UI
| 3. | Access Longhorn UI with node port | 1. Create a cluster (3 worker nodes and 1 etcd/control plane) in rancher, Go to the default project.<br>2. Go to App, Click the launch app.<br>3. Select longhorn.<br>4. Select `NodePort` under the Longhorn UI service.<br>5. Once the app is deployed successfully, click the link like [32059/tcp](http://104.131.80.163:32059/) appears in App page.<br>6. The page should redirect to longhorn UI - [http://node-ip:32059/#/dashboard](http://104.131.80.163:32059/#/dashboard)<br>7. Verify all the pages, refresh each page and verify. Create a volume and check the volume detail page also. |
| 4. | Access Longhorn UI with ingress controller | 1. Create a cluster(3 worker nodes and 1 etcd/control plane).<br>2. Deploy longhorn.<br>3. Create ingress controller. refer [https://longhorn.io/docs/1.0.1/deploy/accessing-the-ui/longhorn-ingress/](https://longhorn.io/docs/1.0.1/deploy/accessing-the-ui/longhorn-ingress/)<br>4. If cluster is imported/created in rancher create ingress using rancher UI by selecting `Target Backend` as `longhorn frontend` and path `/`<br>5. Access the ingress. It should redirect to longhorn UI.<br>6. Verify all the pages, refresh each page and verify. Create a volume and check the volume detail page also. |
| 5. | Access Longhorn UI with a load balancer | 1. Create a cluster (3 worker nodes and 1 etcd/control plane) in rancher.<br>2. Create a route 53 entry pointing to worker nodes of the cluster in AWS.<br>3. Deploy longhorn from catalog library and mention the route 53 entry in the load balancer.<br>4. Go to the link that appears on the app page for the longhorn app.<br>5. The page to redirect to longhorn UI with URL as route 53 entry.<br>6. Verify all the pages, refresh each page and verify. Create a volume and check the volume detail page also.
-| 6. | Access Longhorn UI with reverse proxy | 1. Create a cluster (3 worker nodes and 1 etcd/control plane) in rancher, Go to the default project.<br>2. Go to App, Click the launch app.<br>3. Select longhorn.<br>4. Select `NodePort` under the Longhorn UI service.<br>5. Install nginx in local system.<br>6. Set the `proxy_pass` of [http://node-ip:32059](http://104.131.80.163:32059/#/dashboard) in ngnix.conf file as per below example.<br>7. Start nginx<br>8. Access the port given in `listen` parameter from nginx.conf. ex - //localhost:822<br>9. The page should redirect to longhorn UI<br>10. Verify all the pages, refresh each page and verify. Create a volume and check the volume detail page also.
+| 6. | Access Longhorn UI with reverse proxy | 1. Create a cluster (3 worker nodes and 1 etcd/control plane) in rancher, Go to the default project.<br>2. Go to App, Click the launch app.<br>3. Select longhorn.<br>4. Select `NodePort` under the Longhorn UI service.<br>5. Install nginx in local system.<br>6. Set the `proxy_pass` of [http://node-ip:32059](http://104.131.80.163:32059/#/dashboard) in nginx.conf file as per below example.<br>7. Start nginx<br>8. Access the port given in `listen` parameter from nginx.conf. ex - //localhost:822<br>9. The page should redirect to longhorn UI<br>10. Verify all the pages, refresh each page and verify. Create a volume and check the volume detail page also.

nginx.conf example

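The referenced nginx.conf example is collapsed by the diff viewer. A minimal reverse-proxy sketch of what such a config typically looks like (the node address, `listen` port, and NodePort value here are placeholders, not taken from the commit):

```nginx
# Hypothetical minimal reverse proxy for the Longhorn UI NodePort.
# Replace <node-ip> with a worker node address; 32059 is the NodePort
# from the steps above, 8082 is an arbitrary local listen port.
events {}
http {
    server {
        listen 8082;
        location / {
            proxy_pass http://<node-ip>:32059;
        }
    }
}
```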
@@ -37,7 +37,7 @@ https://github.com/longhorn/longhorn/issues/467
*And* storageClass in `longhorn-storageclass` configMap should not have `recurringJobs`.
*And* storageClass in `longhorn-storageclass` configMap should have `recurringJobSelector`.
```
-recurringJobSelector:{"name":"snapshot-1-97893a05-77074ba4","isGroup":fals{"name":"backup-1-954b3c8c-59467025","isGroup":false}]'
+recurringJobSelector:{"name":"snapshot-1-97893a05-77074ba4","isGroup":false{"name":"backup-1-954b3c8c-59467025","isGroup":false}]'
```

When create new PVC.
@@ -37,7 +37,7 @@ index de77b7246..ac6263ac5 100644
**And** Wait 1~2 hours for collection data to send to the influxDB database.

**Then** the value of field `longhorn_volume_average_size_bytes` in the influxdb should equal to the average size of all v1 volumes (excluding v2 volumes).
-**And** the value of field `longhorn_volume_average_actual_size_bytes` in the influxdb should be equal or simular to the average actual size of all v1 volumes (excluding v2 volumes).
+**And** the value of field `longhorn_volume_average_actual_size_bytes` in the influxdb should be equal or similar to the average actual size of all v1 volumes (excluding v2 volumes).
> It's OK for the actual size to be slightly off due to ongoing workload activities, such as data writing by the upgrade-responder.
```bash
# Get the sizes in the influxdb.
23 changes: 23 additions & 0 deletions e2e/keywords/backup.resource
@@ -0,0 +1,23 @@
*** Settings ***
Documentation Backup Keywords
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/backup_keywords.py

*** Keywords ***
Create backup ${backup_id} for volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
create_backup ${volume_name} ${backup_id}

Verify backup list contains no error for volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
verify_no_error ${volume_name}

Verify backup list contains backup ${backup_id} of volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
${backup} = get_backup ${volume_name} ${backup_id}
Should Not Be Equal ${backup} ${None}

Delete backup volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
delete_backup_volume ${volume_name}
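For context, these keywords use Robot Framework's embedded-argument syntax, so a test case can read almost as prose. A hypothetical usage sketch, limited to the keywords defined in this file (the test case name and resource path are illustrative):

```robotframework
*** Settings ***
Resource    backup.resource

*** Test Cases ***
Backup List Sketch
    # hypothetical flow exercising the keywords above
    Create backup 0 for volume 0
    Verify backup list contains backup 0 of volume 0
    Verify backup list contains no error for volume 0
    Delete backup volume 0
```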
14 changes: 14 additions & 0 deletions e2e/keywords/backupstore.resource
@@ -0,0 +1,14 @@
*** Settings ***
Documentation Backup Store Keywords
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/backupstore_keywords.py

*** Keywords ***
Place file ${file_name} into the backups folder of volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
create_file_in_backups_folder ${volume_name} ${file_name}

Delete file ${file_name} in the backups folder of volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
delete_file_in_backups_folder ${volume_name} ${file_name}
7 changes: 5 additions & 2 deletions e2e/keywords/common.resource
@@ -11,10 +11,12 @@ Library ../libs/keywords/volume_keywords.py
Library ../libs/keywords/workload_keywords.py
Library ../libs/keywords/persistentvolumeclaim_keywords.py
Library ../libs/keywords/network_keywords.py
Library ../libs/keywords/backupstore_keywords.py
Library ../libs/keywords/storageclass_keywords.py
Library ../libs/keywords/node_keywords.py
Library ../libs/keywords/backing_image_keywords.py
Library ../libs/keywords/setting_keywords.py
Library ../libs/keywords/backupstore_keywords.py
Library ../libs/keywords/backup_keywords.py

*** Keywords ***
Set test environment
@@ -34,9 +36,10 @@ Cleanup test resources
cleanup_persistentvolumeclaims
cleanup_volumes
cleanup_storageclasses
cleanup_backupstore
cleanup_backups
cleanup_disks
cleanup_backing_images
reset_backupstore

Cleanup test resources include off nodes
Power on off node
1 change: 1 addition & 0 deletions e2e/libs/backup/__init__.py
@@ -0,0 +1 @@
from backup.backup import Backup
48 changes: 48 additions & 0 deletions e2e/libs/backup/backup.py
@@ -0,0 +1,48 @@
from backup.base import Base
from backup.crd import CRD
from backup.rest import Rest
from strategy import LonghornOperationStrategy
from utility.utility import logging


class Backup(Base):

_strategy = LonghornOperationStrategy.REST

def __init__(self):
if self._strategy == LonghornOperationStrategy.CRD:
self.backup = CRD()
else:
self.backup = Rest()

def create(self, volume_name, backup_id):
return self.backup.create(volume_name, backup_id)

def get(self, volume_name, backup_id):
return NotImplemented

def get_backup_volume(self, volume_name):
return self.backup.get_backup_volume(volume_name)

def list(self, volume_name):
return self.backup.list(volume_name)

def verify_no_error(self, volume_name):
backup_volume = self.get_backup_volume(volume_name)
assert not backup_volume['messages'], \
f"expect backup volume {volume_name} has no error, but it's {backup_volume['messages']}"

def delete(self, volume_name, backup_id):
return NotImplemented

def delete_backup_volume(self, volume_name):
return self.backup.delete_backup_volume(volume_name)

def restore(self, volume_name, backup_id):
return NotImplemented

def cleanup_backup_volumes(self):
return self.backup.cleanup_backup_volumes()

def cleanup_system_backups(self):
return self.backup.cleanup_system_backups()
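The `Backup` facade above selects a `CRD` or `Rest` implementation at construction time via a class-level strategy flag. A minimal, self-contained sketch of that selection pattern (names here are illustrative stand-ins, not Longhorn's API):

```python
from abc import ABC, abstractmethod
from enum import Enum


class Strategy(Enum):
    CRD = "crd"    # operate via Kubernetes custom resources
    REST = "rest"  # operate via the Longhorn REST client


class Base(ABC):
    @abstractmethod
    def create(self, volume_name, backup_id): ...


class RestImpl(Base):
    def create(self, volume_name, backup_id):
        return f"rest:{volume_name}:{backup_id}"


class CrdImpl(Base):
    def create(self, volume_name, backup_id):
        return f"crd:{volume_name}:{backup_id}"


class Backup:
    # Flipping this one flag switches every operation to the other backend.
    _strategy = Strategy.REST

    def __init__(self):
        self.backup = CrdImpl() if self._strategy == Strategy.CRD else RestImpl()

    def create(self, volume_name, backup_id):
        # delegate to the chosen backend
        return self.backup.create(volume_name, backup_id)
```

Keyword libraries then depend only on `Backup`, so individual operations can migrate from REST to CRD without touching the callers.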
70 changes: 70 additions & 0 deletions e2e/libs/backup/base.py
@@ -0,0 +1,70 @@
from abc import ABC, abstractmethod
from utility.utility import set_annotation
from utility.utility import get_annotation_value

class Base(ABC):

ANNOT_ID = "test.longhorn.io/backup-id"

@abstractmethod
def create(self, volume_name, backup_id):
return NotImplemented

def set_backup_id(self, backup_name, backup_id):
set_annotation(
group="longhorn.io",
version="v1beta2",
namespace="longhorn-system",
plural="backups",
name=backup_name,
annotation_key=self.ANNOT_ID,
annotation_value=backup_id
)

def get_backup_id(self, backup_name):
return get_annotation_value(
group="longhorn.io",
version="v1beta2",
namespace="longhorn-system",
plural="backups",
name=backup_name,
annotation_key=self.ANNOT_ID
)

@abstractmethod
def get(self, volume_name, backup_id):
return NotImplemented

def get_by_snapshot(self, volume_name, snapshot_name):
return NotImplemented

@abstractmethod
def get_backup_volume(self, volume_name):
return NotImplemented

def wait_for_backup_completed(self, volume_name, snapshot_name):
return NotImplemented

@abstractmethod
def list(self, volume_name):
return NotImplemented

@abstractmethod
def delete(self, volume_name, backup_id):
return NotImplemented

@abstractmethod
def delete_backup_volume(self, volume_name):
return NotImplemented

@abstractmethod
def restore(self, volume_name, backup_id):
return NotImplemented

@abstractmethod
def cleanup_backup_volumes(self):
return NotImplemented

@abstractmethod
def cleanup_system_backups(self):
return NotImplemented
7 changes: 7 additions & 0 deletions e2e/libs/backup/crd.py
@@ -0,0 +1,7 @@
from backup.base import Base


class CRD(Base):

def __init__(self):
pass
151 changes: 151 additions & 0 deletions e2e/libs/backup/rest.py
@@ -0,0 +1,151 @@
from backup.base import Base
from utility.utility import logging
from utility.utility import get_longhorn_client
from utility.utility import get_retry_count_and_interval
from node_exec import NodeExec
from volume import Rest as RestVolume
from snapshot import Snapshot as RestSnapshot
import time


class Rest(Base):

def __init__(self):
self.longhorn_client = get_longhorn_client()
self.volume = RestVolume(NodeExec.get_instance())
self.snapshot = RestSnapshot()
self.retry_count, self.retry_interval = get_retry_count_and_interval()

def create(self, volume_name, backup_id):
# create snapshot
snapshot = self.snapshot.create(volume_name, backup_id)

volume = self.volume.get(volume_name)
volume.snapshotBackup(name=snapshot.name)
# after backup request we need to wait for completion of the backup
# since the backup.cfg will only be available once the backup operation
# has been completed
self.wait_for_backup_completed(volume_name, snapshot.name)

backup = self.get_by_snapshot(volume_name, snapshot.name)
volume = self.volume.get(volume_name)
assert volume.lastBackup == backup.name, \
f"expect volume lastBackup is {backup.name}, but it's {volume.lastBackup}"
assert volume.lastBackupAt != "", \
f"expect volume lastBackup is not empty, but it's {volume.lastBackupAt}"

self.set_backup_id(backup.name, backup_id)

return backup

def get(self, volume_name, backup_id):
backups = self.list(volume_name)
for backup in backups:
if self.get_backup_id(backup.name) == backup_id:
return backup
return None

def get_by_snapshot(self, volume_name, snapshot_name):
"""
look for a backup from snapshot on the backupstore
it's important to note that this can only be used for a completed backup
since the backup.cfg will only be written once a backup operation has
been completed successfully
"""
backup_volume = self.get_backup_volume(volume_name)
for i in range(self.retry_count):
logging(f"Trying to get backup from volume {volume_name} snapshot {snapshot_name} ... ({i})")
backups = backup_volume.backupList().data
for backup in backups:
if backup.snapshotName == snapshot_name:
return backup
time.sleep(self.retry_interval)
assert False, f"Failed to find backup from volume {volume_name} snapshot {snapshot_name}"

def get_backup_volume(self, volume_name):
for i in range(self.retry_count):
logging(f"Trying to get backup volume {volume_name} ... ({i})")
backup_volumes = self.longhorn_client.list_backupVolume()
for backup_volume in backup_volumes:
if backup_volume.name == volume_name and backup_volume.created != "":
return backup_volume
time.sleep(self.retry_interval)
assert False, f"Failed to find backup volume for volume {volume_name}"

def wait_for_backup_completed(self, volume_name, snapshot_name):
completed = False
for i in range(self.retry_count):
logging(f"Waiting for backup from volume {volume_name} snapshot {snapshot_name} completed ... ({i})")
volume = self.volume.get(volume_name)
for backup in volume.backupStatus:
if backup.snapshot != snapshot_name:
continue
elif backup.state == "Completed":
assert backup.progress == 100 and backup.error == "", f"backup = {backup}"
completed = True
break
if completed:
break
time.sleep(self.retry_interval)
assert completed, f"Expected from volume {volume_name} snapshot {snapshot_name} completed, but it's {volume}"

def list(self, volume_name):
backup_volume = self.get_backup_volume(volume_name)
return backup_volume.backupList().data

def delete(self, volume_name, backup_id):
return NotImplemented

def delete_backup_volume(self, volume_name):
bv = self.longhorn_client.by_id_backupVolume(volume_name)
self.longhorn_client.delete(bv)
self.wait_for_backup_volume_delete(volume_name)

def wait_for_backup_volume_delete(self, name):
retry_count, retry_interval = get_retry_count_and_interval()
for _ in range(retry_count):
bvs = self.longhorn_client.list_backupVolume()
found = False
for bv in bvs:
if bv.name == name:
found = True
break
if not found:
break
time.sleep(retry_interval)
assert not found

def restore(self, volume_name, backup_id):
return NotImplemented

def cleanup_backup_volumes(self):
backup_volumes = self.longhorn_client.list_backup_volume()

# we delete the whole backup volume, which skips block gc
for backup_volume in backup_volumes:
self.delete_backup_volume(backup_volume.name)

backup_volumes = self.longhorn_client.list_backup_volume()
assert backup_volumes.data == []

def cleanup_system_backups(self):

system_backups = self.longhorn_client.list_system_backup()
for system_backup in system_backups:
# ignore the error when clean up
try:
self.longhorn_client.delete(system_backup)
except Exception as e:
name = system_backup['name']
print("\nException when cleanup system backup ", name)
print(e)

ok = False
retry_count, retry_interval = get_retry_count_and_interval()
for _ in range(retry_count):
system_backups = self.longhorn_client.list_system_backup()
if len(system_backups) == 0:
ok = True
break
time.sleep(retry_interval)
assert ok
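Nearly every method in `rest.py` repeats the same poll-until-true loop driven by `get_retry_count_and_interval()`. That loop can be captured as one generic helper; a sketch under the assumption that conditions are passed as callables (`wait_for` is a hypothetical name, not part of the test library):

```python
import time


def wait_for(condition, retry_count=150, retry_interval=0.01, description="condition"):
    """Poll `condition` until it returns a truthy value, or fail after retries.

    Returns whatever truthy value `condition` produced, so callers can
    poll for an object (e.g. a backup volume) rather than just a flag.
    """
    for _ in range(retry_count):
        result = condition()
        if result:
            return result
        time.sleep(retry_interval)
    raise AssertionError(f"timed out waiting for {description}")
```

With such a helper, `wait_for_backup_volume_delete` would reduce to a one-line call with a condition that checks the volume is absent from the listing.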
2 changes: 1 addition & 1 deletion e2e/libs/backupstore/__init__.py
@@ -1,2 +1,2 @@
from backupstore.nfs import Nfs
-from backupstore.minio import Minio
+from backupstore.s3 import S3