Merge pull request #145 from flask-dashboard/refactor_testmonitor
Refactor Testmonitor
FlyingBird95 committed May 23, 2018
2 parents 61a99ad + 64f78a8 commit 67d91b3
Showing 41 changed files with 1,185 additions and 1,057 deletions.
31 changes: 21 additions & 10 deletions docs/functionality.rst
@@ -113,19 +113,22 @@ Using the collected data, a number of observations can be made:

- Do users experience different execution times in different versions of the application?

Test-Coverage Monitoring
Monitoring Unit Test Performance
------------------------
To enable Travis to run your unit tests and send the results to the Dashboard, two steps have to be taken:
In addition to monitoring the performance of a live, deployed version of a web service,
the performance of that web service can also be monitored through its unit tests.
This of course assumes that unit tests have been written for the web service in question.
Since this monitoring should happen automatically, a Travis setup for the project is also a prerequisite.

1. The installation requirement for the Dashboard has to be added to the `setup.py` file of your app:
To enable Travis to run your unit tests and send the obtained results to the Dashboard, two steps have to be taken:

.. code-block:: python
1. In the `setup.py` file of your web service, the Dashboard has to be added as a requirement:

dependency_links=["https://github.com/flask-dashboard/Flask-MonitoringDashboard/tarball/master#egg=flask_monitoringdashboard"]
.. code-block:: python
install_requires=('flask_monitoringdashboard')
2. In your `.travis.yml` file, one script command should be added:
2. In the `.travis.yml` file, a script command has to be added:

.. code-block:: bash
@@ -134,10 +137,18 @@ To enable Travis to run your unit tests and send the results to the Dashboard, t
--times=5 \
--url=https://yourdomain.org/dashboard
The `test_folder` argument specifies where the performance collection process can find the unit tests to use.
The `times` argument (optional, default: 5) specifies how many times to run each of the unit tests.
The `url` argument (optional) specifies where the Dashboard is that needs to receive the performance results.
When the last argument is omitted, the performance testing will run, but without publishing the results.
The `test_folder` argument (optional, default: ./) specifies where the performance collection process can find
the unit tests to use. When omitted, the current working directory is used.
The `times` argument (optional, default: 5) specifies how many times to run each of the unit tests.
The `url` argument (optional) specifies the Dashboard that should receive the performance results.
When this argument is omitted, the performance tests still run, but the results are not published.
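
A complete script entry could look roughly as follows. This is a sketch only: the exact command is not shown
in this diff (the first lines of the block above are collapsed), and the `python -m` invocation and the
`./tests` folder are assumptions rather than values taken from the documentation itself.

.. code-block:: bash

   # Illustrative only: run each unit test 5 times and publish the results.
   python -m flask_monitoringdashboard.collect_performance \
     --test_folder=./tests \
     --times=5 \
     --url=https://yourdomain.org/dashboard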

Travis will now monitor the performance of the unit tests automatically after every commit.
The results show up in the Dashboard under 'Testmonitor',
which lists every test that has been run, together with the endpoints of the web service that it exercises.
Visualizations of how the performance of the unit tests evolves over time are also available there.
This gives the developer of the web service insight into the performance change to expect before a new version of the
web service is deployed.

Outliers
--------
26 changes: 20 additions & 6 deletions flask_monitoringdashboard/__init__.py
@@ -48,18 +48,32 @@ def bind(app):

import os
# Only initialize unit test logging when running on Travis.
if '/home/travis/build/' in os.getcwd():
print('Detected running on Travis.')
if 'TRAVIS' in os.environ:
import datetime
from flask import request

@user_app.before_first_request
def log_current_version():
"""
Logs the version of the user app that is currently being tested.
:return:
"""
home = os.path.expanduser("~")
with open(home + '/app_version.log', 'w') as log:
log.write(config.version)

@user_app.after_request
def after_request(response):
def log_endpoint_hit(response):
"""
Log every endpoint hit together with a UTC timestamp; registered as an after_request function on the user app.
:param response: the response object that the actual endpoint returns
:return: the unchanged response of the original endpoint
"""
hit_time_stamp = str(datetime.datetime.utcnow())
home = os.path.expanduser("~")
log = open(home + '/endpoint_hits.log', 'a')
log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))
log.close()
with open(home + '/endpoint_hits.log', 'a') as log:
log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))

return response

# Add all route-functions to the blueprint
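For reference, the two hooks above write plain log files to the home directory of the Travis user; the
collection script shown below reads these back to attach the app version and to match endpoint hits to test runs.
Their contents would look roughly like this (version number, timestamps and endpoint names are purely illustrative):

.. code-block:: text

   # ~/app_version.log
   1.10.0

   # ~/endpoint_hits.log
   "2018-05-23 12:00:01.123456","main.index"
   "2018-05-23 12:00:02.654321","main.user_page"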
66 changes: 42 additions & 24 deletions flask_monitoringdashboard/collect_performance.py
@@ -1,26 +1,36 @@
import argparse
import csv
import os
import datetime
import os
import time
from unittest import TestLoader

import requests

# Parsing the arguments.
parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
parser.add_argument('--test_folder', dest='test_folder', required=True,
help='folder in which the unit tests can be found (example: ./tests)')
parser.add_argument('--times', dest='times', default=5,
help='number of times to execute every unit test (default: 5)')
parser.add_argument('--url', dest='url', default=None,
help='url of the Dashboard to submit the performance results to')
args = parser.parse_args()
# Determine if this script was called normally or if the call was part of a unit test on Travis.
# When unit testing, only run one dummy test from the testmonitor folder and submit to a dummy url.
test_folder = os.getcwd() + '/flask_monitoringdashboard/test/views/testmonitor'
times = '1'
url = 'https://httpbin.org/post'
if 'flask-dashboard/Flask-MonitoringDashboard' not in os.getenv('TRAVIS_BUILD_DIR', ''):
parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
parser.add_argument('--test_folder', dest='test_folder', default='./',
help='folder in which the unit tests can be found (default: ./)')
parser.add_argument('--times', dest='times', default=5,
help='number of times to execute every unit test (default: 5)')
parser.add_argument('--url', dest='url', default=None,
help='url of the Dashboard to submit the performance results to')
args = parser.parse_args()
test_folder = args.test_folder
times = args.times
url = args.url

# Show the settings with which this script will run.
print('Starting the collection of performance results with the following settings:')
print(' - folder containing unit tests: ', args.test_folder)
print(' - number of times to run tests: ', args.times)
print(' - url to submit the results to: ', args.url)
if not args.url:
print(' - folder containing unit tests: ', test_folder)
print(' - number of times to run tests: ', times)
print(' - url to submit the results to: ', url)
if not url:
print('The performance results will not be submitted.')

# Initialize result dictionary and logs.
@@ -34,8 +44,8 @@

# Find the tests and execute them the specified number of times.
# Add the performance results to the result dictionary.
suites = TestLoader().discover(args.test_folder, pattern="*test*.py")
for iteration in range(int(args.times)):
suites = TestLoader().discover(test_folder, pattern="*test*.py")
for iteration in range(int(times)):
for suite in suites:
for case in suite:
for test in case:
@@ -49,7 +59,7 @@
execution_time = (time_after - time_before) * 1000
data['test_runs'].append(
{'name': str(test), 'exec_time': execution_time, 'time': str(datetime.datetime.utcnow()),
'successful': test_result.wasSuccessful(), 'iter': iteration + 1})
'successful': (test_result.wasSuccessful() if test_result else False), 'iter': iteration + 1})
log.close()

# Read and parse the log containing the test runs into an array for processing.
@@ -78,14 +88,22 @@
data['grouped_tests'].append({'endpoint': endpoint_hit[1], 'test_name': test_run[2]})
break

# Retrieve the current version of the user app that is being tested.
with open(home + '/app_version.log', 'r') as log:
data['app_version'] = log.read()

# Add the current Travis Build Job number.
data['travis_job'] = os.getenv('TRAVIS_JOB_NUMBER')

# Send test results and endpoint_name/test_name combinations to the Dashboard if specified.
if args.url:
if args.url[-1] == '/':
args.url += 'submit-test-results'
else:
args.url += '/submit-test-results'
if url:
if 'flask-dashboard/Flask-MonitoringDashboard' not in os.getenv('TRAVIS_BUILD_DIR', ''):
if url[-1] == '/':
url += 'submit-test-results'
else:
url += '/submit-test-results'
try:
requests.post(args.url, json=data)
print('Sent unit test results to the Dashboard at ', args.url)
requests.post(url, json=data)
print('Sent unit test results to the Dashboard at', url)
except Exception as e:
print('Sending unit test results to the dashboard failed:\n{}'.format(e))
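Taking the fields assembled in this script together, the JSON payload that is POSTed to
`<dashboard-url>/submit-test-results` has roughly the following shape. The field names come from this diff;
the values are illustrative, and any additional fields set in the collapsed parts of the script are not shown:

.. code-block:: python

   # Illustrative payload only; the values are made up.
   data = {
       'test_runs': [
           {'name': 'test_get_index (test_views.TestViews)',  # str(test)
            'exec_time': 12.5,                                 # milliseconds
            'time': '2018-05-23 12:00:05.123456',              # UTC timestamp
            'successful': True,
            'iter': 1},
       ],
       'grouped_tests': [
           {'endpoint': 'main.index', 'test_name': 'test_get_index (test_views.TestViews)'},
       ],
       'app_version': '1.10.0',   # read from ~/app_version.log
       'travis_job': '123.1',     # TRAVIS_JOB_NUMBER
   }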
5 changes: 0 additions & 5 deletions flask_monitoringdashboard/core/forms/__init__.py
@@ -11,8 +11,3 @@ class Login(FlaskForm):
name = StringField('Username', [validators.data_required()])
password = PasswordField('Password', [validators.data_required()])
submit = SubmitField('Login')


class RunTests(FlaskForm):
""" Used for serving a login form on /{{ link }}/testmonitor. """
submit = SubmitField('Run selected tests')
2 changes: 2 additions & 0 deletions flask_monitoringdashboard/core/plot/plots.py
@@ -32,6 +32,8 @@ def boxplot(values, **kwargs):
"""
if 'name' in kwargs.keys():
kwargs = add_default_value('marker', {'color': get_color(kwargs.get('name', ''))}, **kwargs)
if 'label' in kwargs.keys():
kwargs = add_default_value('name', kwargs.get('label', ''))
kwargs = add_default_value('x', value=values, **kwargs)
return go.Box(**kwargs)

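With this change a caller can pass a `label` keyword, which is turned into the trace name of the resulting box.
A hypothetical usage sketch (the calling view code is not part of this diff, and the import path is assumed from
the file location):

.. code-block:: python

   from flask_monitoringdashboard.core.plot.plots import boxplot

   # Illustrative only: one box trace of execution times (in ms) for a single unit test.
   trace = boxplot([12.5, 14.1, 13.0, 55.2], label='test_get_index')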
5 changes: 3 additions & 2 deletions flask_monitoringdashboard/core/rules.py
@@ -1,9 +1,10 @@
def get_rules():
def get_rules(end=None):
"""
:param end: if specified, only return the available rules to that endpoint
:return: A list of the current rules in the attached Flask app
"""
from flask_monitoringdashboard import config, user_app

rules = user_app.url_map.iter_rules()
rules = user_app.url_map.iter_rules(endpoint=end)
return [r for r in rules if not r.rule.startswith('/' + config.link)
and not r.rule.startswith('/static-' + config.link)]
3 changes: 3 additions & 0 deletions flask_monitoringdashboard/core/utils.py
@@ -1,9 +1,11 @@
import ast

import numpy as np
from flask import url_for
from werkzeug.routing import BuildError

from flask_monitoringdashboard import config
from flask_monitoringdashboard.core.rules import get_rules
from flask_monitoringdashboard.database.count import count_requests, count_total_requests
from flask_monitoringdashboard.database.endpoint import get_monitor_rule
from flask_monitoringdashboard.database.function_calls import get_date_of_first_request
@@ -13,6 +15,7 @@ def get_endpoint_details(db_session, endpoint):
""" Return details about an endpoint"""
return {
'endpoint': endpoint,
'rules': [r.rule for r in get_rules(endpoint)],
'rule': get_monitor_rule(db_session, endpoint),
'url': get_url(endpoint),
'total_hits': count_requests(db_session, endpoint)
51 changes: 19 additions & 32 deletions flask_monitoringdashboard/database/__init__.py
@@ -29,36 +29,6 @@ class MonitorRule(Base):
last_accessed = Column(DateTime)


class Tests(Base):
""" Table for storing which tests to run. """
__tablename__ = 'tests'
# name must be unique and acts as a primary key
name = Column(String(250), primary_key=True)
# boolean to determine whether the test should be run
run = Column(Boolean, default=True)
# the timestamp of the last time the test was run
lastRun = Column(DateTime)
# whether the test succeeded
succeeded = Column(Boolean)


class TestRun(Base):
""" Table for storing test results. """
__tablename__ = 'testRun'
# name of executed test
name = Column(String(250), primary_key=True)
# execution_time in ms
execution_time = Column(Float, primary_key=True)
# time of adding the result to the database
time = Column(DateTime, primary_key=True)
# version of the website at the moment of adding the result to the database
version = Column(String(100), nullable=False)
# number of the test suite execution
suite = Column(Integer)
# number describing the i-th run of the test within the suite
run = Column(Integer)


class FunctionCall(Base):
""" Table for storing measurements of function calls. """
__tablename__ = 'functionCalls'
@@ -105,8 +75,25 @@ class Outlier(Base):
time = Column(DateTime)


class TestRun(Base):
""" Stores unit test performance results obtained from Travis. """
__tablename__ = 'testRun'
# name of executed test
name = Column(String(250), primary_key=True)
# execution_time in ms
execution_time = Column(Float, primary_key=True)
# time of adding the result to the database
time = Column(DateTime, primary_key=True)
# version of the user app that was tested
version = Column(String(100), nullable=False)
# number of the test suite execution
suite = Column(Integer)
# number describing the i-th run of the test within the suite
run = Column(Integer)


class TestsGrouped(Base):
""" Table for storing grouped tests on endpoints. """
""" Stores which endpoints are tested by which unit tests. """
__tablename__ = 'testsGrouped'
# Name of the endpoint
endpoint = Column(String(250), primary_key=True)
@@ -144,4 +131,4 @@ def session_scope():


def get_tables():
return [MonitorRule, Tests, TestRun, FunctionCall, Outlier, TestsGrouped]
return [MonitorRule, FunctionCall, Outlier, TestRun, TestsGrouped]
9 changes: 8 additions & 1 deletion flask_monitoringdashboard/database/count.py
@@ -1,6 +1,6 @@
from sqlalchemy import func, distinct

from flask_monitoringdashboard.database import FunctionCall, Outlier
from flask_monitoringdashboard.database import FunctionCall, Outlier, TestRun


def count_rows(db_session, column, *criterion):
@@ -39,6 +39,13 @@ def count_versions(db_session):
return count_rows(db_session, FunctionCall.version)


def count_builds(db_session):
"""
:return: The number of Travis builds that are available
"""
return count_rows(db_session, TestRun.suite)


def count_versions_end(db_session, endpoint):
"""
:param endpoint: filter on this endpoint
30 changes: 28 additions & 2 deletions flask_monitoringdashboard/database/count_group.py
@@ -3,7 +3,19 @@
from sqlalchemy import func

from flask_monitoringdashboard.core.timezone import to_utc_datetime
from flask_monitoringdashboard.database import FunctionCall
from flask_monitoringdashboard.database import FunctionCall, TestRun, TestsGrouped


def get_latest_test_version(db_session):
"""
Retrieves the latest version of the user app that was tested.
:param db_session: session for the database
:return: latest test version
"""
latest_time = db_session.query(func.max(TestRun.time)).one()[0]
if latest_time:
return db_session.query(TestRun.version).filter(TestRun.time == latest_time).one()[0]
return None


def count_rows_group(db_session, column, *criterion):
@@ -14,7 +26,7 @@ def count_rows_group(db_session, column, *criterion):
:param criterion: where-clause of the query
:return: list with the number of rows per endpoint
"""
return db_session.query(FunctionCall.endpoint, func.count(column)).\
return db_session.query(FunctionCall.endpoint, func.count(column)). \
filter(*criterion).group_by(FunctionCall.endpoint).all()


@@ -39,6 +51,20 @@ def count_requests_group(db_session, *where):
return count_rows_group(db_session, FunctionCall.id, *where)


def count_times_tested(db_session, *where):
""" Return the number of tests for an endpoint (possibly with more filter arguments).
:param db_session: session for the database
:param where: additional arguments
"""
result = {}
test_endpoint_groups = db_session.query(TestsGrouped).all()
for group in test_endpoint_groups:
times = db_session.query(func.count(TestRun.name)).filter(TestRun.name == group.test_name).\
filter(*where).one()[0]
result[group.endpoint] = result.get(group.endpoint, 0) + int(times)
return result.items()


def count_requests_per_day(db_session, list_of_days):
""" Return the number of hits for all endpoints per day.
:param db_session: session for the database
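A usage sketch for the two new helpers in this file, assuming `session_scope()` (defined in `database/__init__.py`
above) is a context manager that yields a database session:

.. code-block:: python

   from flask_monitoringdashboard.database import session_scope
   from flask_monitoringdashboard.database.count_group import count_times_tested, get_latest_test_version

   # Illustrative only: report the most recently tested app version and how often
   # each endpoint is covered by the unit tests.
   with session_scope() as db_session:
       print('Latest tested version:', get_latest_test_version(db_session))
       for endpoint, times in count_times_tested(db_session):
           print('{} was tested {} times'.format(endpoint, times))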
