Merge pull request #147 from flask-dashboard/development
Development
bogdanp05 committed May 23, 2018
2 parents f85b17d + 4a9912f commit 2ac73bc
Showing 56 changed files with 1,561 additions and 1,358 deletions.
8 changes: 8 additions & 0 deletions docs/changelog.rst
@@ -5,6 +5,14 @@ All notable changes to this project will be documented in this file.
This project adheres to `Semantic Versioning <http://semver.org/>`_.
Please note that the changes before version 1.10.0 have not been documented.

Unreleased
----------
Changed

- Restructuring of Test-Monitoring page

- Identify testRun by Travis build number


v1.13.0
----------
6 changes: 5 additions & 1 deletion docs/configuration.rst
@@ -62,7 +62,8 @@ contains the entry point of the app. The following things can be configured:
GUEST_PASSWORD=['dashboardguest!', 'second_pw!']
GIT=/<path to your project>/.git/
OUTLIER_DETECTION_CONSTANT=2.5
DASHBOARD_ENABLED=True
OUTLIERS_ENABLED=True
SECURITY_TOKEN='cc83733cb0af8b884ff6577086b87909'
TEST_DIR=/<path to your project>/tests/
COLORS={'main':'[0,97,255]',
'static':'[255,153,0]'}
@@ -109,6 +110,9 @@ This might look a bit overwhelming, but the following list explains everything i
the expected overhead of the Dashboard is a bit larger, as shown
`here <https://github.com/flask-dashboard/Testing-Dashboard-Overhead>`_.

- **SECURITY_TOKEN:** The token that is used for exporting the data to other services. If you leave this unchanged,
any service is able to retrieve the data from the database.

- **TEST_DIR:** Specifies where the unit tests reside. This will show up in the configuration in the Dashboard.

- **COLORS:** The endpoints are automatically hashed into a color.
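
Purely as an illustration of the "automatically hashed into a color" behavior mentioned above (and not
necessarily the Dashboard's actual implementation), such a deterministic endpoint-to-color mapping could
look like this:

.. code-block:: python

    # Illustration only: one deterministic way to hash an endpoint name into an
    # RGB color. The Dashboard's real color assignment may differ.
    import hashlib

    def endpoint_color(endpoint_name):
        digest = hashlib.md5(endpoint_name.encode('utf-8')).digest()
        return [digest[0], digest[1], digest[2]]  # three values in 0..255

    print(endpoint_color('main.index'))
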
31 changes: 21 additions & 10 deletions docs/functionality.rst
@@ -113,19 +113,22 @@ Using the collected data, a number of observations can be made:

- Do users experience different execution times in different versions of the application?

Test-Coverage Monitoring
Monitoring Unit Test Performance
------------------------
To enable Travis to run your unit tests and send the results to the Dashboard, two steps have to be taken:
In addition to monitoring the performance of a live, deployed version of a web service,
its performance can also be monitored through its unit tests.
This of course assumes that the web service project has unit tests.
Since this monitoring should run automatically, a Travis setup for the project is a prerequisite.

1. The installation requirement for the Dashboard has to be added to the `setup.py` file of your app:
To enable Travis to run your unit tests and send the obtained results to the Dashboard, two steps have to be taken:

.. code-block:: python
1. In the `setup.py` file of your web service, the Dashboard has to be added as a requirement:

dependency_links=["https://github.com/flask-dashboard/Flask-MonitoringDashboard/tarball/master#egg=flask_monitoringdashboard"]
.. code-block:: python
install_requires=('flask_monitoringdashboard')
2. In your `.travis.yml` file, one script command should be added:
2. In the `.travis.yml` file, a script command has to be added:

.. code-block:: bash
@@ -134,10 +137,18 @@
--times=5 \
--url=https://yourdomain.org/dashboard
The `test_folder` argument specifies where the performance collection process can find the unit tests to use.
The `times` argument (optional, default: 5) specifies how many times to run each of the unit tests.
The `url` argument (optional) specifies where the Dashboard is that needs to receive the performance results.
When the last argument is omitted, the performance testing will run, but without publishing the results.
The `test_folder` argument (optional, default: ./) specifies where the performance collection process can find
the unit tests; when omitted, the current working directory is used.
The `times` argument (optional, default: 5) specifies how many times to run each unit test.
The `url` argument (optional) specifies the URL of the Dashboard that should receive the performance results.
When the `url` argument is omitted, the performance tests still run, but the results are not published.
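
For readers who want a feel for what these arguments do, the sketch below mirrors the discovery-and-timing
loop of the collection script (`collect_performance.py`, changed later in this commit). It is a simplified
illustration, not the exact script:

.. code-block:: python

    # Simplified sketch of the collection loop: discover the unit tests, run each
    # one `times` times, and record the wall-clock duration of every run in ms.
    import time
    import unittest

    def flatten(suite):
        """Yield the individual TestCase instances inside a (nested) TestSuite."""
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                for test in flatten(item):
                    yield test
            else:
                yield item

    def collect(test_folder='./', times=5):
        results = []
        suites = unittest.TestLoader().discover(test_folder, pattern='*test*.py')
        for iteration in range(int(times)):
            for test in flatten(suites):
                outcome = unittest.TestResult()
                start = time.time()
                test.run(outcome)
                results.append({'name': str(test),
                                'exec_time': (time.time() - start) * 1000,
                                'successful': outcome.wasSuccessful(),
                                'iter': iteration + 1})
        return results
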

Now Travis will monitor the performance of the unit tests automatically after every commit.
The results show up in the Dashboard under 'Testmonitor',
which lists every test that has been run, along with the endpoints of the web service that it exercises.
Visualizations of how the performance of the unit tests evolves over time are also available there.
This gives the developer of the web service insight into the expected performance change before a new version of the
web service is deployed.

Outliers
--------
26 changes: 20 additions & 6 deletions flask_monitoringdashboard/__init__.py
@@ -48,18 +48,32 @@ def bind(app):

import os
# Only initialize unit test logging when running on Travis.
if '/home/travis/build/' in os.getcwd():
print('Detected running on Travis.')
if 'TRAVIS' in os.environ:
import datetime
from flask import request

@user_app.before_first_request
def log_current_version():
"""
Logs the version of the user app that is currently being tested.
:return:
"""
home = os.path.expanduser("~")
with open(home + '/app_version.log', 'w') as log:
log.write(config.version)

@user_app.after_request
def after_request(response):
def log_endpoint_hit(response):
"""
Add log_endpoint_hit as after_request function that logs the endpoint hits.
:param response: the response object that the actual endpoint returns
:return: the unchanged response of the original endpoint
"""
hit_time_stamp = str(datetime.datetime.utcnow())
home = os.path.expanduser("~")
log = open(home + '/endpoint_hits.log', 'a')
log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))
log.close()
with open(home + '/endpoint_hits.log', 'a') as log:
log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))

return response

# Add all route-functions to the blueprint
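
For reference, a sketch (not part of this commit) of the on-disk format produced by the two Travis-only
hooks above, and one way to read it back. The paths match the code in `bind()`:

    # ~/app_version.log   : a single line containing config.version
    # ~/endpoint_hits.log : one CSV row per handled request, "<utc timestamp>","<endpoint name>"
    import csv
    import os

    home = os.path.expanduser('~')
    with open(home + '/app_version.log') as log:
        app_version = log.read()
    with open(home + '/endpoint_hits.log') as log:
        for hit_time, endpoint in csv.reader(log):
            print(app_version, hit_time, endpoint)
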
66 changes: 42 additions & 24 deletions flask_monitoringdashboard/collect_performance.py
@@ -1,26 +1,36 @@
import argparse
import csv
import os
import datetime
import os
import time
from unittest import TestLoader

import requests

# Parsing the arguments.
parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
parser.add_argument('--test_folder', dest='test_folder', required=True,
help='folder in which the unit tests can be found (example: ./tests)')
parser.add_argument('--times', dest='times', default=5,
help='number of times to execute every unit test (default: 5)')
parser.add_argument('--url', dest='url', default=None,
help='url of the Dashboard to submit the performance results to')
args = parser.parse_args()
# Determine if this script was called normally or if the call was part of a unit test on Travis.
# When unit testing, only run one dummy test from the testmonitor folder and submit to a dummy url.
test_folder = os.getcwd() + '/flask_monitoringdashboard/test/views/testmonitor'
times = '1'
url = 'https://httpbin.org/post'
if 'flask-dashboard/Flask-MonitoringDashboard' not in os.getenv('TRAVIS_BUILD_DIR', ''):
parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
parser.add_argument('--test_folder', dest='test_folder', default='./',
help='folder in which the unit tests can be found (default: ./)')
parser.add_argument('--times', dest='times', default=5,
help='number of times to execute every unit test (default: 5)')
parser.add_argument('--url', dest='url', default=None,
help='url of the Dashboard to submit the performance results to')
args = parser.parse_args()
test_folder = args.test_folder
times = args.times
url = args.url

# Show the settings with which this script will run.
print('Starting the collection of performance results with the following settings:')
print(' - folder containing unit tests: ', args.test_folder)
print(' - number of times to run tests: ', args.times)
print(' - url to submit the results to: ', args.url)
if not args.url:
print(' - folder containing unit tests: ', test_folder)
print(' - number of times to run tests: ', times)
print(' - url to submit the results to: ', url)
if not url:
print('The performance results will not be submitted.')

# Initialize result dictionary and logs.
@@ -34,8 +44,8 @@

# Find the tests and execute them the specified number of times.
# Add the performance results to the result dictionary.
suites = TestLoader().discover(args.test_folder, pattern="*test*.py")
for iteration in range(int(args.times)):
suites = TestLoader().discover(test_folder, pattern="*test*.py")
for iteration in range(int(times)):
for suite in suites:
for case in suite:
for test in case:
@@ -49,7 +59,7 @@
execution_time = (time_after - time_before) * 1000
data['test_runs'].append(
{'name': str(test), 'exec_time': execution_time, 'time': str(datetime.datetime.utcnow()),
'successful': test_result.wasSuccessful(), 'iter': iteration + 1})
'successful': (test_result.wasSuccessful() if test_result else False), 'iter': iteration + 1})
log.close()

# Read and parse the log containing the test runs into an array for processing.
@@ -78,14 +88,22 @@
data['grouped_tests'].append({'endpoint': endpoint_hit[1], 'test_name': test_run[2]})
break

# Retrieve the current version of the user app that is being tested.
with open(home + '/app_version.log', 'r') as log:
data['app_version'] = log.read()

# Add the current Travis Build Job number.
data['travis_job'] = os.getenv('TRAVIS_JOB_NUMBER')

# Send test results and endpoint_name/test_name combinations to the Dashboard if specified.
if args.url:
if args.url[-1] == '/':
args.url += 'submit-test-results'
else:
args.url += '/submit-test-results'
if url:
if 'flask-dashboard/Flask-MonitoringDashboard' not in os.getenv('TRAVIS_BUILD_DIR', ''):
if url[-1] == '/':
url += 'submit-test-results'
else:
url += '/submit-test-results'
try:
requests.post(args.url, json=data)
print('Sent unit test results to the Dashboard at ', args.url)
requests.post(url, json=data)
print('Sent unit test results to the Dashboard at', url)
except Exception as e:
print('Sending unit test results to the dashboard failed:\n{}'.format(e))
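
Based on the parts of the script visible in this diff, the JSON payload that gets POSTed to
`<url>/submit-test-results` has roughly the shape sketched below; the hidden hunks may add or adjust
fields, and all values here are made up:

    # Rough shape of the submitted payload (illustrative values only).
    data = {
        'test_runs': [
            {'name': 'test_get_users (tests.test_api.ApiTest)',  # hypothetical test name
             'exec_time': 12.7,                                   # milliseconds
             'time': '2018-05-23 12:34:56.789012',                # UTC timestamp
             'successful': True,
             'iter': 1},
        ],
        'grouped_tests': [
            {'endpoint': 'main.index', 'test_name': 'test_index (tests.test_views.ViewTest)'},
        ],
        'app_version': '1.13.1',
        'travis_job': '123.4',   # TRAVIS_JOB_NUMBER
    }
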
2 changes: 1 addition & 1 deletion flask_monitoringdashboard/constants.json
@@ -1,5 +1,5 @@
{
"version": "1.13.0",
"version": "1.13.1",
"author": "Patrick Vogel, Thijs Klooster & Bogdan Petre",
"email": "patrickvogel@live.nl"
}
8 changes: 2 additions & 6 deletions flask_monitoringdashboard/core/forms/__init__.py
@@ -1,17 +1,13 @@
from flask_wtf import FlaskForm
from wtforms import validators, SubmitField, PasswordField, StringField

from .daterange import get_daterange_form
from .slider import get_slider_form
from .double_slider import get_double_slider_form
from .slider import get_slider_form


class Login(FlaskForm):
""" Used for serving a login form. """
name = StringField('Username', [validators.data_required()])
password = PasswordField('Password', [validators.data_required()])
submit = SubmitField('Login')


class RunTests(FlaskForm):
""" Used for serving a login form on /{{ link }}/testmonitor. """
submit = SubmitField('Run selected tests')
4 changes: 2 additions & 2 deletions flask_monitoringdashboard/core/forms/daterange.py
@@ -14,8 +14,8 @@ class SelectDateRangeForm(FlaskForm):
""" Used for selecting two dates, which together specify a range. """
start_date = DateField('Start date', format=DATE_FORMAT, validators=[validators.data_required()])
end_date = DateField('End date', format=DATE_FORMAT, validators=[validators.data_required()])
submit = SubmitField('Submit')
title = 'Select two dates for reducing the size of the graph'
submit = SubmitField('Update')
title = 'Select the time interval'

def get_days(self):
"""
15 changes: 10 additions & 5 deletions flask_monitoringdashboard/core/forms/double_slider.py
@@ -3,14 +3,16 @@
from wtforms import SubmitField
from wtforms.fields.html5 import IntegerRangeField

from flask_monitoringdashboard.core.forms.slider import DEFAULT_SLIDER_VALUE


class DoubleSliderForm(FlaskForm):
"""
Class for generating a slider that can be used to reduce the graph.
"""
slider0 = IntegerRangeField()
slider1 = IntegerRangeField()
submit = SubmitField('Submit')
submit = SubmitField('Update')
title = 'Select two numbers below for reducing the size of the graph'
subtitle = ['Subtitle0', 'Subtitle1']

@@ -52,20 +54,23 @@ def content(self):
self.submit(class_="btn btn-primary btn-block"))


def get_double_slider_form(slider_max=[100, 100], subtitle=None):
def get_double_slider_form(slider_max=(100, 100), title=None, subtitle=None):
"""
Return a SliderForm with the range from 0 to slider_max
:param slider_max: maximum value for the slider
:param title: override the default title
:param subtitle: override the default titles of the 2 sliders
:return: a SliderForm with the range (0 ... slider_max)
"""
form = DoubleSliderForm(request.form)
form.min_value = [1, 1]
form.min_value = (1, 1)
form.max_value = slider_max
if title:
form.title = title
if subtitle:
form.subtitle = subtitle
if 'slider0' in request.form:
form.start_value = [request.form['slider0'], request.form['slider1']]
else:
form.start_value = [min(max(form.min_value[i], form.min_value[i] + (slider_max[i] - form.min_value[i]) // 2),
form.max_value[i]) for i in range(2)]
form.start_value = [min(DEFAULT_SLIDER_VALUE, form.max_value[i]) for i in range(2)]
return form
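
A hypothetical call showing the new `title` and `subtitle` overrides (it has to run inside a Flask request
context, since the form reads `request.form`; the argument values are made up):

    from flask_monitoringdashboard.core.forms import get_double_slider_form

    form = get_double_slider_form(slider_max=(25, 100),
                                  title='Select the number of versions and users',
                                  subtitle=['Versions', 'Users'])
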
11 changes: 8 additions & 3 deletions flask_monitoringdashboard/core/forms/slider.py
@@ -3,13 +3,15 @@
from wtforms import SubmitField
from wtforms.fields.html5 import IntegerRangeField

DEFAULT_SLIDER_VALUE = 10


class SliderForm(FlaskForm):
"""
Class for generating a slider that can be used to reduce the graph.
"""
slider = IntegerRangeField()
submit = SubmitField('Submit')
submit = SubmitField('Update')
title = 'Select a number below for reducing the size of the graph'

def get_slider_value(self):
@@ -36,17 +38,20 @@ def content(self):
self.submit(class_="btn btn-primary btn-block"))


def get_slider_form(slider_max=100):
def get_slider_form(slider_max=100, title=None):
"""
Return a SliderForm with the range from 0 to slider_max
:param slider_max: maximum value for the slider
:param title: override the default title
:return: a SliderForm with the range (0 ... slider_max)
"""
form = SliderForm(request.form)
form.min_value = 1
form.max_value = slider_max
if title:
form.title = title
if 'slider' in request.form:
form.start_value = request.form['slider']
else:
form.start_value = min(max(form.min_value, form.min_value + (slider_max - form.min_value) // 2), form.max_value)
form.start_value = min(DEFAULT_SLIDER_VALUE, slider_max)
return form
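
Likewise, a hypothetical call using the new `title` override of the single slider (again inside a Flask
request context; values are made up):

    from flask_monitoringdashboard.core.forms import get_slider_form

    form = get_slider_form(slider_max=50, title='Select the number of endpoints')
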
2 changes: 2 additions & 0 deletions flask_monitoringdashboard/core/plot/plots.py
@@ -32,6 +32,8 @@ def boxplot(values, **kwargs):
"""
if 'name' in kwargs.keys():
kwargs = add_default_value('marker', {'color': get_color(kwargs.get('name', ''))}, **kwargs)
if 'label' in kwargs.keys():
kwargs = add_default_value('name', kwargs.get('label', ''))
kwargs = add_default_value('x', value=values, **kwargs)
return go.Box(**kwargs)

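A small, hypothetical illustration of the new `label` fallback: when no explicit `name` is passed, the
label is used as the trace name (values below are made up):

    from flask_monitoringdashboard.core.plot.plots import boxplot

    trace = boxplot([4.2, 5.1, 3.9], label='GET /users')  # roughly go.Box(name='GET /users', x=[4.2, 5.1, 3.9])
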
5 changes: 3 additions & 2 deletions flask_monitoringdashboard/core/rules.py
@@ -1,9 +1,10 @@
def get_rules():
def get_rules(end=None):
"""
:param end: if specified, only return the available rules to that endpoint
:return: A list of the current rules in the attached Flask app
"""
from flask_monitoringdashboard import config, user_app

rules = user_app.url_map.iter_rules()
rules = user_app.url_map.iter_rules(endpoint=end)
return [r for r in rules if not r.rule.startswith('/' + config.link)
and not r.rule.startswith('/static-' + config.link)]
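
A hypothetical use of the new `end` parameter: fetch only the URL rules of one endpoint, excluding the
Dashboard's own routes. This assumes the Dashboard has already been bound to the user app:

    from flask_monitoringdashboard.core.rules import get_rules

    index_rules = get_rules('main.index')        # 'main.index' is a made-up endpoint name
    print([r.rule for r in index_rules])
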
3 changes: 3 additions & 0 deletions flask_monitoringdashboard/core/utils.py
@@ -1,9 +1,11 @@
import ast

import numpy as np
from flask import url_for
from werkzeug.routing import BuildError

from flask_monitoringdashboard import config
from flask_monitoringdashboard.core.rules import get_rules
from flask_monitoringdashboard.database.count import count_requests, count_total_requests
from flask_monitoringdashboard.database.endpoint import get_monitor_rule
from flask_monitoringdashboard.database.function_calls import get_date_of_first_request
@@ -13,6 +15,7 @@ def get_endpoint_details(db_session, endpoint):
""" Return details about an endpoint"""
return {
'endpoint': endpoint,
'rules': [r.rule for r in get_rules(endpoint)],
'rule': get_monitor_rule(db_session, endpoint),
'url': get_url(endpoint),
'total_hits': count_requests(db_session, endpoint)
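
For context, the value returned by `get_endpoint_details()` after this change looks roughly like the sketch
below. Only the keys visible in this hunk are shown, the hidden remainder of the function adds more, and all
values are made up:

    details = {
        'endpoint': 'main.index',
        'rules': ['/', '/index'],   # new in this commit: every URL rule that maps to the endpoint
        'rule': '<monitoring rule object from the database>',
        'url': 'http://localhost:5000/',
        'total_hits': 1024,
    }
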
