locust-load-starter

Introduction

This repository aims to ease the creation of a load testing project by mapping the API automatically from an OpenAPI v3 YAML or JSON file.

Depending on the tests to be done, mapping multiple requests by hand can be cumbersome: reading the API documentation and transcribing the required endpoints and parameters into the test project. The main idea here is to reduce that step and ramp up test creation.

This project is built around the Locust load testing framework, and everything generated by the scripts targets that framework. Since it is not tied to any specific product, it can be used as the base for a new performance testing project by simply copying the content of this repository into the new repository's base folder.

Another feature is generating a Locust task file and function from a HAR file. This is useful when the performance test targets a service with a web interface: use a web browser to walk through the flow you want to test, export the dev tools network log as a HAR file, and use that file to generate the task. The generator combines the requests in the HAR file with the endpoint files generated by the other script.

Getting Started

To use this project, install Python and pip. Then install the dependencies using pip and the requirements.txt file (pip install -r requirements.txt).
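
For example, on a Unix-like shell (the virtual environment is optional but keeps the dependencies isolated):

  • python -m venv .venv && source .venv/bin/activate
  • pip install -r requirements.txt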

After this we are set to go.

Build and Test

Import Endpoints

To use this as a base for a new load testing project, get the spec file (.yml, .yaml, or .json) from the API documentation; if the API uses the Swagger template, the schema URL and/or a button to access the schema should be available.

Then execute the Python script scripts/generate_endpoints.py with the -f argument set to the full path of the downloaded file. The script will create the endpoints folder in the root of the project and build inside it the sub-folders and files mapping the information retrieved from the spec file.

  • python scripts/generate_endpoints.py -h
  • python scripts/generate_endpoints.py -f /full/path/doc.yml

The sub-folders created follow the same structure as the API paths; this is a simple way to organize the files instead of keeping all of them in the same folder.

For each endpoint it creates a sub-folder containing a file named after the endpoint in the spec. The file holds a variable with the endpoint path and a class whose functions cover all the API methods, each wrapping the Locust client call and returning the response.
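
For the petstore spec in the appendix, the generated layout could look roughly like this (the exact folder and file names depend on the spec, so yours will differ):

endpoints/
    pets/
        pets.py    # ENDPOINT = "/pets" and the Pets class shown below
        ...        # parameter files and further path folders, named after the spec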

from http.cookiejar import CookieJar

import locust

ENDPOINT = "/pets"


#  This class contains a list of the functions for this endpoint with all the
#  requests that were exported from the swagger spec file. To use this,
#  instantiate the class and pass the Locust user into it; after that we are
#  able to start building the tests.
class Pets:
    #  Initialize the class with the locust instance.
    def __init__(self, loc: locust.user.users):
        self.loc = loc

    def get_listPets(self,
              params: dict[str, any] = None,
              headers: dict[str, any] = None,
              cookies: CookieJar = None,
              auth: tuple[str, str] = None,
              json: any = None,
              data: dict[str, any] = None,
              files: dict[str, any] = None,
              redirect: bool = False,
              verify: bool = True):

        response = self.loc.client.get(
            ENDPOINT,
            params=params,
            headers=headers,
            cookies=cookies,
            auth=auth,
            json=json,
            data=data,
            files=files,
            name="get_listPets",
            allow_redirects=redirect,
            verify=verify
        )

        return response

If the endpoint method has parameters, a file listing the parameters for that endpoint method is also generated; its entries can be used when building the params argument of the Python method.

#  List of all the parameters exposed in the swagger file, with a comment
#  indicating the type, that can be used when creating the test requests. The
#  name of this file is linked to a request function in the class of
#  this folder.
limit = "limit"  # integer
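
As a minimal sketch (the helper function below is hypothetical, not something the script generates), the parameter name can be used to build the params argument of the endpoint method:

from endpoints.pets.pets import Pets


def list_first_ten_pets(locust_user):
    pets: Pets = Pets(locust_user)

    #  "limit" matches the name exported in the generated parameters file;
    #  importing that constant instead of hard-coding the string also works.
    return pets.get_listPets(params={"limit": 10})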

With this in place we can create the Locust tasks in a separate file and folder, instantiating the needed class and using its methods, with all the necessary arguments, to make the calls and build the load test(s).

Depending on the test(s), we could create a folder in the project (like tasks) holding the tasks that use the generated methods, and place in the project root folder the test file that calls those tasks.

  • /tasks/example_task_1.py
from locust import TaskSet, task

from endpoints.pets.pets import Pets


class GetPetsList(TaskSet):
    @task
    def get_pets_list(self):
        pets: Pets = Pets(self)

        pets.get_listPets()
  • /tasks/example_task_2.py
from locust import TaskSet, task

from endpoints.pets.pets import Pets


class GetPetDetail(TaskSet):
    @task
    def get_pet_detail(self):
        pets: Pets = Pets(self)

        pets.get_showPetById_id(id="pet_id")
  • /example_test.py
from locust import HttpUser

from tasks.example_task_1 import GetPetsList
from tasks.example_task_2 import GetPetDetail


class LoadTest(HttpUser):

    tasks = {
        GetPetsList: 1,
        GetPetDetail: 2
    }
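
With the tasks and test file in place, the test can be started with Locust directly; the command below is just an example headless run against a placeholder host (see also the Locust execution command in the appendix):

  • locust -f example_test.py --headless -u 10 -r 2 -t 1m -H https://domain.com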

Import Flow

To create a task with a sequence of API calls, open the dev tools of a web browser and perform, in the web page, the user actions that you want to replicate in the test. Then open the network tab of the dev tools and export it in the HTTP Archive format (aka HAR).

After importing the endpoints with the previous script, execute scripts/generate_flow.py with the -f flag set to the full path of the .har file and the -e flag set to the full path of the endpoints folder created by generate_endpoints.py. Both flags are required.

  • python scripts/generate_flow.py -h
  • python scripts/generate_flow.py -f /path/to/the/file.har -e /path/to/the/endpoints/folder

The script will create a folder called tasks, in case it doesn't exist, in the parent folder of the script folder (../tasks). In that folder it will create a Python file named flow_ followed by a sequence number, which lets the user import multiple flows without renaming the files. The created file will have a class extending the Locust TaskSet and a function decorated with the Locust task tag.

In the task function, the correct endpoint function invocation is added for each API call in the .har file. If the call has a request body it is included, and the same goes for parameters. All the necessary imports are added to the file.

The generated file has the same structure as the example tasks described in the previous section of this document.
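
As a rough sketch, a generated file (e.g. tasks/flow_1.py) could look like the example below; the class name, the called endpoints, and their arguments are illustrative and depend entirely on the recorded HAR file:

from locust import TaskSet, task

from endpoints.pets.pets import Pets


class Flow1(TaskSet):
    @task
    def flow_1(self):
        pets: Pets = Pets(self)

        #  One call per request recorded in the HAR file, with the captured
        #  parameters and request bodies filled in.
        pets.get_listPets(params={"limit": 10})
        pets.get_showPetById_id(id="1")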

PS:

  • At the moment, request headers and cookies are not added to the flow, since both could carry an authorization token that would eventually expire and render the test unusable.
  • There might be some issues creating the flow, caused by a faulty implementation in the web interface and/or an outdated OpenAPI file, so make sure to open the created file and check that it is ok.
  • In case the flow requires authentication, the logic to obtain the authorization tokens and to use them in the API calls needs to be created manually; there is no automation regarding this. A sketch of one possible approach follows.
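
For example, a minimal sketch of handling authentication manually, assuming a hypothetical /login endpoint that returns a token in its JSON body (adapt the endpoint, payload, and token field to your service):

from locust import TaskSet, task

from endpoints.pets.pets import Pets


class AuthenticatedFlow(TaskSet):
    def on_start(self):
        #  Hypothetical login call; the path, payload, and response shape are
        #  placeholders and must match the real service.
        response = self.client.post(
            "/login", json={"username": "user", "password": "secret"}
        )
        self.token = response.json().get("token", "")

    @task
    def get_pets(self):
        pets: Pets = Pets(self)

        #  Pass the token through the headers argument of the generated method.
        pets.get_listPets(headers={"Authorization": f"Bearer {self.token}"})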

Execute Locust Tests

There is another script in this repository called run_tests.py. It was created to execute unattended tests and to produce reports whose names carry the information needed when writing a manual report afterwards. When multiple tests must be executed in a row and a summary report has to be written afterwards, details such as the timestamp, the number of users, the duration,... and all the Locust reports are important. With this script, all of this information is included in the names of the HTML and CSV files automatically, without having to set it when starting the test. The script will create the reports folder in the location where the script is stored.

  • python run_tests.py -h
  • python run_tests.py -u 50 -r 5 -t 5m -H https://asd.com -f /path/example_test.py

This script has a limited number of flags, and in some cases more configuration needs to be passed to the Locust execution. To add more configuration to the test execution, the tester can use the -e/--extra flag to pass any of the unsupported flags directly to Locust. The value(s) passed to this flag need to be enclosed in quotes.

  • python run_tests.py -e "--loglevel ERROR --logfile ./test.log"

Contribute

This was done taking into consideration some of the obstacles encountered during the creation and execution of load tests in a given environment; it will not be a tool to rule them all. So new contributions from the community will be gladly accepted.

To contribute, report a bug, or propose an improvement, use an Issue to expose it and generate a discussion. Describe the idea as well as possible, link relevant information, and in case there is already a branch with the change, link it in the Issue.

Hope this helps!!!

Appendix

Open API v3 yaml example

openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            maximum: 100
            format: int32
      responses:
        '200':
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:    
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
    post:
      summary: Create a pet
      operationId: createPets
      tags:
        - pets
      responses:
        '201':
          description: Null response
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
  /pets/{petId}:
    get:
      summary: Info for a specific pet
      operationId: showPetById
      tags:
        - pets
      parameters:
        - name: petId
          in: path
          required: true
          description: The id of the pet to retrieve
          schema:
            type: string
      responses:
        '200':
          description: Expected response to a valid request
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pet"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      maxItems: 100
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string

Helping commands

Python linter

  • pycodestyle scripts/

Execute endpoints generator

  • python scripts/generate_endpoints.py -f "/home/path/Downloads/petstore.yaml"

Execute flow generator

  • python scripts/generate_flow.py -f /home/path/Downloads/petstore.har -e /home/path/locust-load-starter/endpoints

Execute runner

  • python run_tests.py -u 5 -r 5 -t 5m -f example_test.py -H https://domain.com

Locust execution

  • locust -t 1s -u 1 -f example_test.py --headless -H https://domain.com --loglevel DEBUG --logfile asd
