User Guide for Jira Service Management

https://developer.atlassian.com/platform/marketplace/dc-apps-performance-toolkit-user-guide-jira-service-management/

Running tests

Pre-requisites

  • A working Jira Service Management instance of a supported version (see the toolkit README for the list of supported JSM versions) with agents, customers, service desks, requests, organizations, etc.
  • A client machine with 4 CPUs and 16 GB of RAM to run the Toolkit.
  • A virtual environment with Python and bzt installed. See the root toolkit README file for more details.

If you need performance testing results at a production level, follow the instructions in the official User Guide to set up Jira Service Management DC with the corresponding dataset. For spiking, testing, or development, your local Jira Service Management instance will work well.

Step 1: Update jsm.yml

  • application_hostname: test JSM hostname (without the http/https prefix).
  • application_protocol: http or https.
  • application_port: 80 (for http) or 443 (for https), 8080, 2990 or your instance-specific port.
  • secure: True or False. Default value is True. Set False to allow insecure connections, e.g. when using self-signed SSL certificate.
  • application_postfix: empty by default; set to e.g. /jira for a URL like http://localhost:2990/jira.
  • admin_login: Jira admin user name (after restoring the dataset from the SQL dump, the admin user name is: admin).
  • admin_password: Jira admin user password (after restoring the dataset from the SQL dump, the admin user password is: admin).
  • load_executor: executor for load tests. Valid options are jmeter (default) or locust.
  • concurrency_agents: 50 - number of concurrent agents for JMeter agents scenario.
  • concurrency_customers: 150 - number of concurrent customers for JMeter customers scenario.
  • test_duration: 45m - duration of test execution.
  • ramp-up: 3m - amount of time it will take JMeter or Locust to add all test users to test execution.
  • total_actions_per_hour: 54500 - number of total JMeter/Locust actions per hour.
  • WEBDRIVER_VISIBLE: visibility of the Chrome browser during Selenium execution (False by default).
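
Put together, these settings live in the env section of jsm.yml. A minimal sketch with placeholder values (the hostname is an example, and your file may contain additional keys under settings):

    settings:
      env:
        application_hostname: jsm.example.com   # placeholder, no http/https prefix
        application_protocol: https
        application_port: 443
        secure: True
        application_postfix: ""                 # e.g. /jira for http://localhost:2990/jira
        admin_login: admin
        admin_password: admin
        load_executor: jmeter                   # jmeter or locust
        concurrency_agents: 50
        concurrency_customers: 150
        test_duration: 45m
        ramp-up: 3m
        total_actions_per_hour: 54500
        WEBDRIVER_VISIBLE: False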

Step 2: Run tests

Run Taurus.

bzt jsm.yml

Results

Results are located in the results/jsm/YY-MM-DD-hh-mm-ss directory:

  • bzt.log - log of bzt run
  • error_artifacts - folder with screenshots and HTMLs of Selenium fails
  • jmeter.err - JMeter errors log
  • locust.err - Locust errors log
  • kpi.jtl - JMeter raw data
  • pytest.out - detailed log of Selenium execution, including stacktraces of Selenium fails
  • selenium.jtl - Selenium raw data
  • results.csv - consolidated results of execution
  • results_summary.log - detailed summary of the run. Make sure that the overall run status is OK before moving to the next steps. Note: there are separate agents and customers scenarios for the JMeter/Locust/Selenium scripts, so there are two sets of the above log files.

Useful information

Changing performance workload for JMeter and Locust

The jsm.yml file has action_name fields in the env section with a percentage for each action. You can change values from 0 to 100 to increase or decrease the execution frequency of certain actions. The percentages must add up to 100 if you want the performance script to maintain the throughput defined in total_actions_per_hour. For full-scale results generation, use the default values for concurrency, test_duration, total_actions_per_hour and ramp-up. For app-specific actions development and testing, it is fine to reduce concurrency, test_duration, total_actions_per_hour and ramp-up.
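
As a sketch, the relevant part of the env section might look like the snippet below. The action key names here are illustrative; use the actual action_name keys already defined in your jsm.yml and keep each scenario's percentages summing to 100:

    settings:
      env:
        # illustrative action percentages; the real action_name keys are already listed in jsm.yml
        agent_view_request: 50        # hypothetical key name
        agent_add_comment: 30         # hypothetical key name
        agent_browse_projects: 20     # hypothetical key name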

JMeter

Debugging JMeter scripts

  1. Open JMeter UI as described in README.md.
  2. On the View Results Tree controller, click the Browse button and open error.jtl from the app/results/jsm/YY-MM-DD-hh-mm-ss folder.

From this view, you can click on any failed action and see the request and response data in appropriate tabs.

Run one JMeter action

Option 1: Run one JMeter action via GUI

  1. Follow steps described in README.md.

Option 2: Run one JMeter action via bzt

  1. In jsm.yml, set the percentage of the desired action to 100 and all other percentages to 0 (see the sketch after this list).
  2. Run bzt jsm.yml.
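
For example, to run only one action (key names are illustrative), the percentages in jsm.yml could be set like this before running bzt:

    settings:
      env:
        desired_action: 100       # the single action under test (illustrative key name)
        other_action_one: 0       # every other action set to 0
        other_action_two: 0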

Locust

Debugging Locust scripts

The detailed log of the Locust executor is located in the results/jsm/YY-MM-DD-hh-mm-ss/locust.log file. Locust errors and stacktraces are located in the results/jsm/YY-MM-DD-hh-mm-ss/locust*.err files.

Additional debug information can be enabled by setting the verbose flag to true in the jsm.yml configuration file. To add a log message, use logger.locust_info('your INFO message') in the code.
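
As a sketch, verbose is a boolean key in jsm.yml; in the toolkit configs it typically sits under the settings section (exact placement may vary by toolkit version):

    settings:
      verbose: true    # enables additional Locust debug output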

Running Locust tests locally without the Performance Toolkit

Start locust UI mode

  1. Activate the virtualenv for the Performance Toolkit.
  2. Navigate to the app directory and execute the command locust --locustfile locustio/jsm/agents_locustfile.py.
  3. Open your browser and navigate to localhost:8089.
  4. Enter the number of total users to simulate (1 is the recommended value for debugging).
  5. Enter the hatch rate (users spawned per second).
  6. Press the Start spawning button.

Start Locust console mode

  1. Activate the virtualenv for the Performance Toolkit.
  2. Navigate to the app directory and execute the command locust --headless --locustfile locustio/jsm/agents_locustfile.py --users N --spawn-rate R, where N is the total number of users to simulate and R is the spawn rate.

Full logs of the local run can be found in the results/jsm/YY-MM-DD-hh-mm-ss_local/ directory.

To execute one Locust action, open jsm.yml, set the percentage value of the action you would like to run separately to 100, and set the percentage value of all other actions to 0.

Selenium

Debugging Selenium scripts

The detailed log and stacktraces of Selenium PyTest failures are located in the results/jsm/YY-MM-DD-hh-mm-ss/pytest.out file.

Also, screenshots and HTMLs of Selenium failures are stored in the results/jsm/YY-MM-DD-hh-mm-ss/error_artifacts folder.

Running Selenium tests with Browser GUI

In the jsm.yml file, set WEBDRIVER_VISIBLE: True.

Running Selenium tests locally without the Performance Toolkit

  1. Activate the virtualenv for the Performance Toolkit.
  2. Navigate to the selenium folder using the cd app/selenium_ui command.
  3. In the jsm.yml file, set WEBDRIVER_VISIBLE: True.
  4. Run all Selenium PyTest tests with the pytest jsm_ui_agents.py or pytest jsm_ui_customers.py command.
  5. To run one Selenium PyTest test (e.g., test_1_selenium_agent_browse_service_desk_projects_list), execute the first login test and the required one with this command:

pytest jsm_ui_agents.py::test_0_selenium_agent_a_login jsm_ui_agents.py::test_1_selenium_agent_browse_service_desk_projects_list.

Comparing different runs

Navigate to the reports_generation folder and follow README.md instructions to generate side-by-side comparison charts.

Run prepare data script locally

  1. Activate the virtualenv for the Performance Toolkit.
  2. Navigate to the app folder.
  3. Set PYTHONPATH to the full path of the app folder with the command:
    export PYTHONPATH=`pwd`    # for mac or linux
    set PYTHONPATH=%cd%        # for windows
  4. Run prepare data script:
    python util/data_preparation/jsm_prepare_data.py