SASPing is a set of Python scripts that emulate user behaviour in order to measure the responsiveness of an environment.


SASPING

What is this?

SASPING is a testing and reporting framework that simulates user activity in order to validate the status and record the responsiveness of your SAS® installation. It is inspired by Chris Blake's 2016 SAS Global Forum presentation titled What about When It’s Down? An Application for the Enhancement of the SAS® Middle-Tier User Experience.


How does it work?

This project is made up of two parts:

  • The Collector is a Python-based data collection script that emulates user activity (including logging on to SAS) in order to measure application responsiveness from a user's perspective, logging the results to a CSV file
  • The Dashboard is a single-page web application that lets users view the current responsiveness of those applications (there's also a third Python script, 'the Aggregator', which prepares the data generated by the first for use by the second)

The tests performed by the Collector are defined in a JSON file (more on that below), and the Collector and Aggregator scripts should be scheduled to run at regular intervals via cron or similar. The Dashboard is served by the SAS Web Server (Apache); the Aggregator produces and updates a set of CSV files, placing them in the same document root as the Dashboard, where the Dashboard application reads them in via AJAX.

Why would I want this?

Other monitoring solutions exist, and this is by no means a replacement for those. However, most monitoring tools tend to look at application health from an IT operations perspective. That monitoring data is very rarely shown to the end users, and quite rightfully so; end users should not need to know how to interpret application performance metrics. Ideally, the applications should just work.

However, users do encounter errors, and when something is not working correctly, some form of explanation for those errors at the point when they occur drastically improves the user experience. That is the purpose of this project: to give end users an instant answer to questions like 'Is it just slow for me?', 'Is the application down?' and 'When are our servers least busy?', all without requiring them to pick up the phone or raise a service request.

What do I need to run this?

This project targets Red Hat Enterprise Linux 6 as a minimum, which means Python 2.6. The only non-core requirement is the Requests library, which can be installed via pip, or downloaded and unzipped into the same directory as the collector script (main.py).
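
For example, to install it via pip:

pip install requests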

Deployment

Running the Collector script

python main.py -s [settings json file path] -o [output dir]

or

python main.py --settings=[settings json file path] --output=[output dir]

This command will run the main collector script. It will read a test definition file (see settings.json below), run the tests described in that file, and append data to a file called sasping_data.csv in the specified output directory (the -o/--output argument).
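
For example (the paths here are illustrative, not defaults):

python main.py -s /opt/sasping/settings.json -o /var/www/html/sasping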

Example settings.json:

{
  "hostUrl": "https://apps.boemskats.com/",
  "loginPath": "/SASLogon/login",
  "loginParams": {
    "username": "sasdemo",
    "password": "Orion123"
  },
  "applications": [{
    "name": "Stored Process Web Application",
    "tests": [{
      "id": "id1",
      "type": "stored_process",
      "execPath": "/SASStoredProcess/do",
      "execParams": {
        "_program": "/Apps/startupService"
      },
      "validations": {"mustContain": ["usermessage"], "cantContain": ["ERROR:"]}
    }]
  }]
}

What is a Test

A test is an HTTP request whose response is checked against defined Validations. Tests are run at regular intervals, and their response times and validation outcomes are recorded as data.

What is a Validation

A validation is a Python regular expression that the result of a test request either must match, or must not match, in order for that test to be successful. A test can be constructed using any number of either type of validation. If a test fails, the validation that it failed on will be recorded in the test results.
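
To make this concrete, here is a minimal sketch of how a single test and its validations could be evaluated with the Requests library. It is illustrative only; the function and variable names are not taken from the Collector's actual source:

import re
import time
import requests

def run_test(session, host_url, test):
    # Issue the test request and time it
    start = time.time()
    response = session.get(host_url.rstrip('/') + test['execPath'],
                           params=test['execParams'])
    elapsed = time.time() - start

    # Apply both kinds of validation; record the first one that fails
    failed_on = None
    for pattern in test['validations'].get('mustContain', []):
        if not re.search(pattern, response.text):
            failed_on = pattern  # a required pattern was missing
            break
    if failed_on is None:
        for pattern in test['validations'].get('cantContain', []):
            if re.search(pattern, response.text):
                failed_on = pattern  # a forbidden pattern was present
                break
    return elapsed, failed_on  # failed_on is None when the test passes

session = requests.Session()  # an authenticated session is assumed here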

What is an Application

An Application, in this context, is a logical grouping of tests. An application's relative response time is the average response time of all of its tests.
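
As an illustrative fragment (not the project's source), that calculation is simply:

# response_times: the recorded response times of one application's tests
app_response_time = sum(response_times) / float(len(response_times))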

Running the Aggregator script

The Aggregator script can be run as follows

python aggregate.py -i [master csv file] -m [number of max data points]

or

python aggregate.py --input=[master csv file] --maximum=[number of max data points]

The Aggregator will shrink the data if necessary and break it down into smaller files suitable for rendering on slower browsers by the Dashboard application. The second argument is the maximum number of data points to be rendered by the graph the user sees. The Aggregator combines multiple collector runs into one, then repeatedly halves the number of data points until it reaches the highest possible count that is fewer than that maximum.
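
A sketch of that halving logic (the pair-averaging reduction is an assumption, not lifted from aggregate.py):

def downsample(points, maximum):
    # Halve the series by averaging adjacent pairs until it fits
    # under the limit (an odd trailing point is dropped for brevity)
    while len(points) > maximum:
        points = [(a + b) / 2.0 for a, b in zip(points[0::2], points[1::2])]
    return points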

Building the Report Dashboard

Run npm run release from the report/ directory to build the web application. This will create a report/dist folder, which should be copied to a web-accessible destination.

Add cron jobs to run the Collector and Aggregator scripts. Both scripts should create/save their files under the web app path; a sample schedule is sketched below.
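
For example, a crontab along these lines (the schedule and paths are assumptions; adjust them to your environment):

# Collect every 5 minutes, then aggregate 2 minutes later
*/5 * * * *    python /opt/sasping/main.py -s /opt/sasping/settings.json -o /var/www/html/sasping
2-59/5 * * * * python /opt/sasping/aggregate.py -i /var/www/html/sasping/sasping_data.csv -m 100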

Development

Building the Report

  • pip install -r requirements.txt
  • cd ./report && npm install
  • npm run dev - build and run (it watches for file changes and rebuilds your JS and Sass)

Testing

Generating Test Data

Open another terminal session in the sasping directory

  • python test_generator.py -s 1477324439 -i 3600 -m 60000 -t 10 -a 5 -o report/build - generate some dummy data
  • python aggregate.py -i report/build/sasping_data.csv -m 100 - run the aggregator to shrink the generated data
  • open http://localhost:8000/build/
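
If nothing is already serving the report/ directory on port 8000 (the dev build may do this; that is an assumption), Python 2's built-in server is one option:

cd report && python -m SimpleHTTPServer 8000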

Create dummy test data

python test_generator.py -s [from unix timestamp] -i [interval in seconds] -m [max runs] -t [tests per run] -a [number of apps] -o [output]

or

python test_generator.py --start=[from unix timestamp] --interval=[interval in seconds] --maximum=[max runs] --tests=[tests per run] --apps=[number of apps] --output=[output]

e.g. python test_generator.py -s 1389970377 -i 3600 -m 6000 -t 10 -a 5 -o report/build will create a 60,000-row test file. The first run will be dated Fri, 17 Jan 2014, with subsequent runs at one-hour intervals.
