Quite a few tweaks to the README :) We're getting there!
Espenhh committed Apr 18, 2012
1 parent c8d8bed commit 0e6ad28
Showing 1 changed file, README.md, with 31 additions and 26 deletions.
You can get [a copy of the slides here](http://kvalle.github.com/grinder-workshop).

# Preparation before the workshop

**TL;DR:** run this command: `git clone git://github.com/kvalle/grinder-workshop.git; cd grinder-workshop; ./startAgent.sh example/scenario.properties`

**A bit more detailed:**
To save us all some time, it would be nice if everyone could do some preparation before the workshop begins. That lets us begin working on the tasks immediately, and gives us much more time to learn and help each other.

What you need to do is:
There is also a [script API](http://grinder.sourceforge.net/g3/script-javadoc/index.html).
The first place to check if you have any questions about the language: [Python's official documentation](http://docs.python.org/index.html).
It contains a smorgasbord of good information.
A good way to navigate this is through the [search site](http://docs.python.org/search.html).
For any question regarding the interaction between Python and Java, the [Jython home page](http://www.jython.org/docs/index.html) is the place to start.

---------------

Look at these if you wish, but only as a last resort if you are completely stuck.
In this first task, we will be writing a test to GET a single URL and measure the response time.

We have prepared some of the code, to help you get started.
First, the test configuration is already made for you in `tests/1.properties`.
This file points to the test script file, `tests/scripts/test1.py`.
Here we have only provided the shell, which you must complete to make the test do anything useful.

In the test script, there are three things you need to do:

1. Create a `Test` with an identification number (this can be anything) and a description.
1. Wrap an instance of `HTTPRequest` with the test you just created.
1. Do a GET request every time the test runs (i.e. from the `__call__`-method).
You can GET any page you wish, just please be sure not to accidentally DOS-attack anybody.
A rather harmless alternative is to get [http://grinder.espenhh.com/rocksolid.php](http://grinder.espenhh.com/rocksolid.php).

To find out how to do the actual GET request, have a look at the [script API](http://grinder.sourceforge.net/g3/script-javadoc/index.html).
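If you get stuck, here is a rough sketch of how such a script could look. It is only an illustration: the test number, description and URL are examples, not required values.

```python
# tests/scripts/test1.py -- a minimal sketch, not the only possible solution
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

test = Test(1, "GET a single page")      # 1. a Test with an id and a description
request = test.wrap(HTTPRequest())       # 2. wrap an HTTPRequest with the test

class TestRunner:
    def __call__(self):
        # 3. every run does one GET; the response time is recorded under test 1
        request.GET("http://grinder.espenhh.com/rocksolid.php")
```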

Once done, check out your results stored in the `grinder-workshop/log` folder.
Just like in task 1, we have prepared the configuration in `tests/2.properties`, and a shell for you to get started scripting in `tests/scripts/task2.py`.

The file `tests/scripts/urls.txt` contains a number of URLs to a small dummy site.
Your task is to write a script that reads `urls.txt`, and then does a GET request against each URL (the URLs are separated by line breaks).
Make sure you use a different `Test` object for each URL, so that Grinder records their response times individually.
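As a hint, a minimal sketch of this approach could look something like the one below; the file handling and test numbering are just one way to do it.

```python
# tests/scripts/task2.py -- one possible sketch, not a reference solution
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# read the URLs, one per line (the path is relative to where the agent is started)
url_file = open("tests/scripts/urls.txt")
urls = [line.strip() for line in url_file.readlines() if line.strip()]
url_file.close()

# give every URL its own Test so Grinder reports the timings separately
requests = []
number = 1
for url in urls:
    test = Test(number, "GET %s" % url)
    requests.append((test.wrap(HTTPRequest()), url))
    number = number + 1

class TestRunner:
    def __call__(self):
        for request, url in requests:
            request.GET(url)
```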

### How-to
If you complete the task quickly, try one or more of the following:
## Task 3 - Validating the responses

In task 2, you created a test script for timing the responses while GETing a series of URLs.
But just sending a request and waiting for some response does not ensure that what you get back is what you expected or wanted.
In this task, we will enhance the script to inspect the responses, and validate them against a set of requirements.
Should the response not fulfill the requirements, we will fail that particular test.

Here we have not prepared any code for you to start from.
Instead, you can use your script from task 2 as a basis, and expand on that.
In case you did not quite finish the previous task, but still would like to move ahead, make use of the [provided solutions](https://github.com/kvalle/grinder-workshop/tree/master/solutions).

You could test that the...

- HTTP status code is 200 (for example)
- response body is larger than some minimum size (in lines, or in bytes)
- response body contains (or does not contain) some string of text
- HTTP header contains some field
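For illustration, checks like these could be written roughly as shown below; the helper name, size limit and example string are made up, so adapt them to the pages you actually test.

```python
# A rough sketch of the checks above, written as a helper you could call from
# __call__ with the HTTPResponse returned by request.GET(url).
def response_is_ok(response):
    body = str(response.getText())                            # the response body
    return (response.getStatusCode() == 200                   # expected status code
            and len(body) > 100                               # some minimum size (bytes)
            and body.find("Grinder") != -1                    # contains some text
            and response.getHeader("Content-Type") is not None)  # header field is present
```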

By the way, the URLs in the `urls.txt` file are meant only as a starting point.
Feel free to add or remove URLs as you like. Just remember: no DOS attacks! ;)

### How-to

You need to know about and use the following:

* `grinder.statistics.delayReports = 1`:
Use this in the beginning of your script, e.g. in the `__init__` method, in order to be able to manually control the reporting.
* `grinder.statistics.getForLastTest().setSuccess(False)`:
This will mark the last test run as a failure.
By default, the result of each test is automatically set to `success==True`, so you don't need to do anything unless you want to register the test as a failure.
* `grinder.statistics.report()`:
This method reports the result of the test.
Call it after your checks have determined success or failure.

To verify that your checks are working, fail some tests and have a look at the test results.
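As an illustration of how these calls fit together, here is a minimal sketch based on a task 2 style script; the URL and the check itself are just examples.

```python
# Sketch of manual reporting with one simple validation check
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

test = Test(1, "GET with validation")
request = test.wrap(HTTPRequest())

class TestRunner:
    def __init__(self):
        # take control of the reporting so failures can be registered manually
        grinder.statistics.delayReports = 1

    def __call__(self):
        response = request.GET("http://grinder.espenhh.com/rocksolid.php")
        if response.getStatusCode() != 200:        # example check; add your own here
            grinder.statistics.getForLastTest().setSuccess(False)
        grinder.statistics.report()
```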

### Extras

If you finish quickly and have some extra time on your hands, consider the following:

In most cases, you will have different requirements when testing different URLs.
Some pages should perhaps have different status codes, contain different text, or return different headers.
Implement this by adding information about which validation checks to perform along with each URL in the `urls.txt` file.

Up until now, we have tested some pretty static pages. You (might have) parsed the response to do some basic content checks (e.g. whether some specific text is present or not). Now, it's time to do some more fancy parsing.

We'll be testing against an API which returns JSON-formatted data. This JSON object will contain links to more stuff you can test against. These links can change for each request, which means that we'll have to parse the JSON to fetch the links; we can't hard-code all the links in the script beforehand. This task will prepare you for testing real APIs out there, whether they expose real links or just IDs that you have to parse out and include in a predefined URL template.

The easiest way to start is to do a manual call against the webpage: http://grinder.espenhh.com/json.php. We have prepared the configuration in `tests/4.properties`, and a shell for you to get started scripting in `tests/scripts/task4.py`. This shell contains the code to GET the JSON and print it out. To get started, just run this test:

`./startAgent.sh tests/4.properties`

Take a look at the JSON, and figure out what you want to test. It can be smart to run the JSON through a ["beautifier"](http://jsonformatter.curiousconcept.com/) to see the structure more clearly.

### How-to

Then, start writing the test. We'll give you complete freedom here, but to get you started you can do the following:

1. Modify the test to parse the JSON. You have the [org.json-API](http://www.json.org/javadoc/org/json/JSONObject.html) available.
2. To start simple, print out the `fetched`-field of the JSON.
3. Now loop through all the tweets, and print them out.
4. Find the URL for each tweet's profile picture, and do a GET against that URL (a rough sketch follows below).
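To give an idea of how the pieces fit together, here is one possible sketch. Note that the field names `tweets` and `profile_image_url` are guesses; inspect the actual JSON to find the real ones.

```python
# Sketch only -- check the real JSON structure before relying on the field names
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest
from org.json import JSONObject

json_test = Test(1, "GET json.php")
image_test = Test(2, "GET profile picture")

json_request = json_test.wrap(HTTPRequest())
image_request = image_test.wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        response = json_request.GET("http://grinder.espenhh.com/json.php")
        json = JSONObject(response.getText())

        # start simple: print a single field
        print json.get("fetched")

        # then walk the list of tweets (field name is an assumption)
        tweets = json.getJSONArray("tweets")
        for i in range(tweets.length()):
            tweet = tweets.getJSONObject(i)
            print tweet
            # GET each profile picture (field name is an assumption)
            image_request.GET(tweet.getString("profile_image_url"))
```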

If you want to continue doing some JSON testing, you can try the real Twitter API. And PLEASE: don't load-test it, just run with a single thread and a single run each time! Load-testing other people's servers without permission is BAD BEHAVIOUR ;)

Sometimes, you don't want to write all your tests by hand; you just want to simulate (and record) real traffic from a browser.

Follow these steps to record a simple web page:

1. Start the proxy server by running the script `./startProxy.sh`. This will start a simple console that lets you input comments, and stop the proxy cleanly.
2. Configure your browser to send traffic through the proxy. This most likely means `localhost`, port `8001`; the output from the first step will tell you if this is correct (read more about configuring the browser [here](http://grinder.sourceforge.net/g3/tcpproxy.html)).
3. Browse to a simple web page (we recommend starting with http://grinder.espenhh.com/simple/). If you browse to a complex page, the generated script will be crazy long!
4. After the page has loaded in the browser, click "stop" in the simple console window.
5. Inspect the generated script: it's located at `proxy/proxygeneratedscript.sh`.
6. Try running the script: `./startAgent.sh proxy/proxygeneratedscript.sh`
7. Check the log, try modifying the script, experiment. You can start by removing all the sleep statements in the script. Then try it on a more complicated page. Have fun =)

---------------

That's all, folks :)
