- Working Jira Service Management instance of a supported version (see the toolkit README for the list of supported JSM versions) with agents, customers, service desks, requests, organizations, etc.
- Client machine with 4 CPUs and 16 GB of RAM to run the Toolkit.
- Virtual environment with Python and bzt installed. See the root toolkit README file for more details.
If you need performance testing results at a production level, follow the instructions in the official User Guide to set up Jira Service Management DC with the corresponding dataset. For spiking, testing, or development, a local Jira Service Management instance works well.
- `application_hostname`: test JSM hostname (without http).
- `application_protocol`: http or https.
- `application_port`: 80 (for http) or 443 (for https), 8080, 2990, or your instance-specific port.
- `secure`: True or False. Default value is True. Set False to allow insecure connections, e.g. when using a self-signed SSL certificate.
- `application_postfix`: empty by default; e.g., `/jira` for a URL like http://localhost:2990/jira.
- `admin_login`: Jira admin user name (after restoring the dataset from the SQL dump, the admin user name is: admin).
- `admin_password`: Jira admin user password (after restoring the dataset from the SQL dump, the admin user password is: admin).
- `load_executor`: executor for load tests. Valid options are jmeter (default) or locust.
- `concurrency_agents`: `50` - number of concurrent agents for the JMeter agents scenario.
- `concurrency_customers`: `150` - number of concurrent customers for the JMeter customers scenario.
- `test_duration`: `45m` - duration of test execution.
- `ramp-up`: `3m` - amount of time it takes JMeter or Locust to add all test users to the test execution.
- `total_actions_per_hour`: `54500` - number of total JMeter/Locust actions per hour.
- `WEBDRIVER_VISIBLE`: visibility of the Chrome browser during Selenium execution (False by default).
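For orientation, here is a minimal sketch of what the `env` block in jsm.yml might look like with these settings. The hostname is a placeholder and the exact layout may differ slightly between toolkit versions, so treat it as an illustration rather than a drop-in config:

```yaml
settings:
  env:
    application_hostname: test-jsm.example.com   # placeholder hostname, no protocol prefix
    application_protocol: https
    application_port: 443
    secure: True
    application_postfix: ""                      # e.g. /jira for http://localhost:2990/jira
    admin_login: admin
    admin_password: admin
    load_executor: jmeter                        # or locust
    concurrency_agents: 50
    concurrency_customers: 150
    test_duration: 45m
    ramp-up: 3m
    total_actions_per_hour: 54500
    WEBDRIVER_VISIBLE: False
```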
Run Taurus:

`bzt jsm.yml`

Results are located in the `results/jsm/YY-MM-DD-hh-mm-ss` directory:
- `bzt.log` - log of the bzt run
- `error_artifacts` - folder with screenshots and HTMLs of Selenium fails
- `jmeter.err` - JMeter errors log
- `locust.err` - Locust errors log
- `kpi.jtl` - JMeter raw data
- `pytest.out` - detailed log of Selenium execution, including stacktraces of Selenium fails
- `selenium.jtl` - Selenium raw data
- `results.csv` - consolidated results of execution
- `results_summary.log` - detailed summary of the run. Make sure that the overall run status is `OK` before moving to the next steps.

Note: there are separate agents and customers scenarios for the JMeter/Locust/Selenium scripts, so there are two sets of the above log files.
The jsm.yml file has `action_name` fields in the `env` section with a percentage for each action. You can change the values from 0 to 100 to increase or decrease the execution frequency of certain actions. The percentages must add up to 100 if you want the performance script to maintain the throughput defined in `total_actions_per_hour`.
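As an illustration, the per-action entries in the `env` section look roughly like the sketch below; the action names here are hypothetical placeholders, so keep the `action_name` keys that already exist in your jsm.yml:

```yaml
settings:
  env:
    # execution frequency per action, in percent; values should sum to 100
    agent_browse_projects: 30      # placeholder action names
    agent_view_request: 30
    customer_create_request: 20
    customer_view_portal: 20
```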
For full-scale results generation, use the default values for concurrency, test_duration, total_actions_per_hour, and ramp-up.
For app-specific actions development and testing, it is OK to reduce concurrency, test_duration, total_actions_per_hour, and ramp-up.
- Open JMeter UI as described in README.md.
- On the `View Results Tree` controller, click the `Browse` button and open `error.jtl` from the `app/results/jsm/YY-MM-DD-hh-mm-ss` folder.
From this view, you can click on any failed action and see the request and response data in appropriate tabs.
- Follow the steps described in README.md.
- In jsm.yml, set the percentage of the desired action to 100 and all other percentages to 0 (see the sketch after this list).
- Run `bzt jsm.yml`.
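For instance, to isolate a single action, the same block could be adjusted roughly as follows (again, the action names are placeholders; use the ones defined in your jsm.yml):

```yaml
settings:
  env:
    agent_view_request: 100       # the action under investigation
    agent_browse_projects: 0      # every other action disabled
    customer_create_request: 0
    customer_view_portal: 0
```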
The detailed log of the Locust executor is located in the `results/jsm/YY-MM-DD-hh-mm-ss/locust.log` file. Locust errors and stacktraces are located in the `results/jsm/YY-MM-DD-hh-mm-ss/locust*.err` files.

Additional debug information can be enabled by setting the `verbose` flag to `true` in the jsm.yml configuration file. To add a log message, use `logger.locust_info('your INFO message')` in the code.
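A minimal sketch of enabling verbose output, assuming the flag lives under the top-level `settings` block of the bzt configuration (adjust if your jsm.yml keeps it elsewhere):

```yaml
settings:
  verbose: true    # assumed location of the flag; enables additional debug output
```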
- Activate the virtualenv for the Performance Toolkit.
- Navigate to the `app` directory and execute the command `locust --locustfile locustio/jsm/agents_locustfile.py`.
- Open your browser and navigate to `localhost:8089`.
- Enter the `Number of total users to simulate` (`1` is the recommended value for debugging purposes).
- Enter the `Hatch rate (users spawned/second)`.
- Press the `Start spawning` button.
- Activate the virtualenv for the Performance Toolkit.
- Navigate to `app` and execute the command `locust --headless --locustfile locustio/jsm/agents_locustfile.py --users N --spawn-rate R`, where `N` is the number of total users to simulate and `R` is the spawn rate.

Full logs of the local run can be found in the `results/jsm/YY-MM-DD-hh-mm-ss_local/` directory.
To execute one Locust action, open jsm.yml, set the percentage value of the action you would like to run separately to 100, and set the percentage value of all other actions to 0.
The detailed log and stacktraces of Selenium PyTest fails are located in the `results/jsm/YY-MM-DD-hh-mm-ss/pytest.out` file.
Also, screenshots and HTMLs of Selenium fails are stored in the `results/jsm/YY-MM-DD-hh-mm-ss/error_artifacts` folder.
In the jsm.yml file, set `WEBDRIVER_VISIBLE: True`.
- Activate the virtualenv for the Performance Toolkit.
- Navigate to the Selenium folder using the `cd app/selenium_ui` command.
- In the jsm.yml file, set `WEBDRIVER_VISIBLE: True`.
- Run all Selenium PyTest tests with the `pytest jsm_ui_agents.py` or `pytest jsm_ui_customers.py` command.
- To run one Selenium PyTest test (e.g., `test_1_selenium_agent_browse_service_desk_projects_list`), execute the first login test together with the required one using this command: `pytest jsm_ui_agents.py::test_0_selenium_agent_a_login jsm_ui_agents.py::test_1_selenium_agent_browse_service_desk_projects_list`.
Navigate to the `reports_generation` folder and follow the README.md instructions to generate side-by-side comparison charts.
- Activate the virtualenv for the Performance Toolkit.
- Navigate to the `app` folder.
- Set `PYTHONPATH` to the full path of the `app` folder with the command:
  - `` export PYTHONPATH=`pwd` `` for Mac or Linux
  - `set PYTHONPATH=%cd%` for Windows
- Run the data preparation script: `python util/data_preparation/jsm_prepare_data.py`