Test task

This project contains the requested test task activity. It uses Cypress as its testing tool. The API documentation is available here. Based on the requirements for UI and API testing, there are two ways of running tests with Cypress: using the UI or using the command line (script mode).

How to install this project

Node - Having Node.js installed on your machine is the only mandatory prerequisite to set up and run this project. To get npm and Node.js installed, you can download them and read the instructions at node.

After completing the Node installation, open a terminal of your preference and select the Node version for this project (check .nvmrc to confirm that the version on your machine is equal to or greater than v16, for example).

# Selecting the version of this project
nvm use
# Checking the version installed
npm -v

After Node's installation is complete, you will need to clone this repo.

# Git Link (HTTPS or SSH)
git clone https://github.com/willcoliveira/test-task.git
git clone git@github.com:willcoliveira/test-task.git

Installing the dependencies

After cloning the repository from GitHub, open the root folder of this project and install the dependencies from package.json. This file contains all the dev dependencies needed to run, validate, debug, install, and set up this test automation project. You can use either of the following commands to install them:

yarn or
npm install

If it doesn't work the first time, you can also reinstall the yarn package globally and repeat the previous step.

npm install --global yarn

After running yarn install, you should see messages like the following:

yarn install
info No lockfile found.
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 🔨  Building fresh packages...
success Saved lockfile.
✨  Done in 19.38s.

After seeing the messages above you're ready to use this project.

How to Run the tests using Cypress Interface

In order to run the specs within the framework interface, with a way to debug the tests, start Cypress using:

yarn cy:open

Cypress UI

With the command above, the Cypress interface will open and you can select the E2E project already configured for this task (at this moment this solution does not cover the component testing feature). After selecting this option, you can also select your preferred browser and the spec file that you want to run.

Cypress browser selection

Cypress test specs

Cypress Execution

Note: The UI tests run against the base url provided in cypress.config.js.
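A minimal sketch of what the relevant part of cypress.config.js might look like (shape only — the real file likely wraps this object in Cypress's defineConfig() helper, and the baseUrl below is a hypothetical placeholder, not the value used in this repo):

```javascript
// Sketch of cypress.config.js — shape only, not this repo's actual config.
const config = {
  e2e: {
    // Hypothetical placeholder — the real baseUrl is set in this repo's config.
    baseUrl: "https://example.com",
    setupNodeEvents(on, config) {
      // node event listeners (plugins) would be registered here
      return config;
    },
  },
};

module.exports = config;
```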

In addition to this option, you can also run the following script to execute the UI tests and follow the execution from your terminal:

yarn test:ui

and for the API ones:

yarn test:api

How to run this project from the CLI

You can run all the tests using the CLI, meaning the tests run in the headless mode of the selected browser. You won't see any interactions; however, the tests run the same way as in the interface.

For UI:

yarn test:ui:headless:mocha

Cypress CLI Execution

For API:

yarn test:api:mocha
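The yarn scripts used throughout this README are defined in package.json. The bodies below are an illustrative guess at how such scripts are commonly wired with Cypress's CLI — the actual spec paths and flags in this repo may differ:

```json
{
  "scripts": {
    "cy:open": "cypress open",
    "test:ui": "cypress run --spec 'cypress/e2e/ui/**'",
    "test:api": "cypress run --spec 'cypress/e2e/api/**'",
    "test:ui:headless:mocha": "cypress run --headless --spec 'cypress/e2e/ui/**' --reporter mochawesome",
    "test:api:mocha": "cypress run --spec 'cypress/e2e/api/**' --reporter mochawesome",
    "lint": "eslint cypress",
    "lint:fix": "eslint cypress --fix"
  }
}
```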

How to Run the tests using Docker

In order to execute the test specs using Docker containers, you can build a new image from the Dockerfile in this project's root folder and execute the tests based on your selection.

PS: If you don't have Docker installed on your machine, you can download it and read the instructions at Docker.

In addition to that, the Makefile contains the scripts to build, run, and clean the containers. Those instructions could be added as part of the CI to run the tests as part of regular deploys.
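A hedged sketch of what such a Makefile could look like — the target names match the commands below, but the image name and Docker flags are assumptions, not this repo's actual Makefile:

```makefile
# Sketch only — the real Makefile in this repo may differ.
IMAGE := test-task            # assumed image name

build:
	docker build -t $(IMAGE) .

run-ui-tests:
	docker run --rm $(IMAGE) yarn test:ui:headless:mocha

run-api-tests:
	docker run --rm $(IMAGE) yarn test:api:mocha

run-tests: run-ui-tests run-api-tests

clean:
	docker rmi -f $(IMAGE)
```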

PS: If you don't have make installed on your machine, please execute the Docker commands directly in your CLI. Otherwise, you can run the following instructions.

Building a new docker image

make build

Building the test container

Executing all the tests when necessary

make run-tests

Running the UI tests

make run-ui-tests

Running the UI tests via container

Running API tests

make run-api-tests

After each execution, you can clean up the generated image and container using:

make clean

Cleaning up container data

For API and UI tests, the reports generated at the end of the run are copied to your local machine, and you can access them after the executions via the path provided.

✓ Reports saved:
../test-task/cypress/reports/output.html

How to analyse the Cypress Test Results

This project uses the mochawesome plugin as its report generator. Once one of your executions finishes with the commands below, you will be able to access its test report inside the reports directory.

yarn test:ui:headless:mocha or yarn test:api:mocha
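Wiring mochawesome up in cypress.config.js typically looks something like the sketch below. This is an assumption about common usage, not a copy of this repo's config — the exact reporterOptions (such as the output filename) may differ:

```javascript
// Sketch: mochawesome reporter options as commonly configured for Cypress.
const reporterConfig = {
  reporter: "mochawesome",
  reporterOptions: {
    reportDir: "cypress/reports", // matches the reports directory mentioned above
    reportFilename: "output",     // would produce output.html, as in this README
    overwrite: true,
    html: true,
    json: false,
  },
};

module.exports = reporterConfig;
```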

Generating the test Report

You will be able to see the full report at the link created, as shown in the image above:

/Users/williamoliveira/Documents/test-task/cypress/reports/output.html

Final Report

Test Report

Final Report containing failures

Also, if a test fails, the report will show the exact step and a screenshot of that point. With this support, you will be able to go to the test and analyse the results properly.

Test Report generated with errors

Test Report with information

Caveats and linter validations

When adding a new commit, a linter validation runs in order to check styles and point out the necessary changes. If you would like to manually test your code against the rules, you can run the following instructions.

Running the same pre-commit scripts and checking the output

yarn lint

Fixing errors and making the code available to be committed

yarn lint:fix

PART 1 – CREATE TEST PLAN

Please describe the approach you would use to test this application and write a test plan.

There is no need to write tests for the verification flow itself (taking pictures, videos, selfies). Please also ignore the QR code and mobile fallback parts.


To describe my approach for testing this test task, I would like to set out some points for functional and non-functional testing from a high-level point of view, along with some of the aspects covered by the automation specs in this project.

OBJECTIVE

Describe a high-level overview of functional and non-functional requirements.

Demo session configuration feature

Functional testing checklist - UI

  • Content validation, including text data and translations
  • Components and fields
  • Mandatory fields
  • Validate text data (full name): valid input, invalid input, special characters, min/max character limits, accessibility via keyboard, empty state, name rules based on countries
  • Validate dropdowns (session language, document country and type): entries; length of the languages, countries, and documents lists; their content; their structure; empty state
  • Validate the Launch Veriff radio button options: default, empty state, content, selection
  • Overall page content within strings and texts
  • Happy paths for the "veriff me" process based on all inputs
  • The same approach for the "Let's get you verified" screen; visual regression testing could be a huge plus, given the number of visual differences between languages and documents

API

  • Validate the API schema for endpoints
  • Registration and sessions
  • Responses and errors for most of the endpoints, using valid, invalid, and bad data: Approved, Declined, Resubmission Requested, Expired, Abandoned
  • Check the response bodies for those endpoints and validate their content
  • Submit session creation: happy path and bad data
  • Composition of the session IDs
  • Token generation
  • Callback URLs
  • Session data saving
  • Disrupted sessions: logout situations caused by missing sessions
  • Additional information for sessions
  • Request body and API secret concatenation for X-SIGNATURE headers

Non-functional testing checklist

  • The API key should not be accepted with wrong content from callbacks
  • A mismatched X-SIGNATURE should not allow the process to proceed
  • Invalid JSON schemas should be blocked in order to avoid page and server crashes
  • Multiple requests at once to evaluate the API's performance
  • Malicious content as input for endpoints
  • Via the UI, play around with some endpoints to simulate pen testing and malicious data
  • Response times for basic checks and endpoints
