Commit b95c830: updated
shyal committed Dec 1, 2016
1 parent 1d6defd commit b95c830
Showing 22 changed files with 451 additions and 162 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -29,6 +29,8 @@ If/when the service you are testing against changes its API, then you can simply
HoverPy works great with the following HTTP clients:

- requests
+- urllib2
+- urllib3
- TBD
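As a rough sketch of how a client ends up talking to HoverPy (the port 8500 matches the Hoverfly default used throughout this commit's examples; the session setup here is illustrative, not part of HoverPy's API):

```python
# Hypothetical sketch: pointing a requests Session at the HoverPy proxy.
# Port 8500 is the default assumed by the examples in this commit.
import requests

session = requests.Session()
session.proxies = {'http': 'http://localhost:8500'}
# session.get(...) would now route through the proxy, letting HoverPy
# capture or simulate the traffic.
```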

### License
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-0.1.9
+0.1.10
2 changes: 2 additions & 0 deletions docs/source/README.rst
@@ -42,6 +42,8 @@ Support
HoverPy works great with the following HTTP clients:

- requests
+- urllib2
+- urllib3
- TBD

License
2 changes: 1 addition & 1 deletion docs/source/basic.rst
@@ -10,7 +10,7 @@ Import hoverpy's main class: HoverPy
from hoverpy import HoverPy
-Import requests and random for http
+Import requests for http

.. code:: python
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -67,9 +67,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.1.9'
+version = '0.1.10'
# The full version, including alpha/beta/rc tags.
-release = '0.1.9'
+release = '0.1.10'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
3 changes: 2 additions & 1 deletion docs/source/unittesting.rst
@@ -25,7 +25,8 @@ Instead of inheriting off ``unittest.TestCase`` let's inherit off
import requests
limit = 50
sites = requests.get(
-    "http://readthedocs.org/api/v1/project/?limit=%d&offset=0&format=json" % limit)
+    "http://readthedocs.org/api/v1/project/?"
+    "limit=%d&offset=0&format=json" % limit)
objects = sites.json()['objects']
links = ["http://readthedocs.org" + x['resource_uri'] for x in objects]
self.assertTrue(len(links) == limit)
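The split URL in this change relies on a detail worth noting: Python joins adjacent string literals at parse time, before the `%` operator applies, so the format specifier in the second literal still sees the full string. A minimal illustration:

```python
# Adjacent string literals are concatenated before "%" formatting runs,
# so splitting a long URL across two literals does not break the format.
limit = 50
url = ("http://readthedocs.org/api/v1/project/?"
       "limit=%d&offset=0&format=json" % limit)
print(url)
```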
55 changes: 55 additions & 0 deletions docs/source/urllib2eg.rst
@@ -0,0 +1,55 @@
.. urllib2

urllib2
********


Import hoverpy's main class: HoverPy

.. code:: python

    from hoverpy import HoverPy

Create our HoverPy object in capture mode

.. code:: python

    hp = HoverPy(capture=True)

Import urllib2 for http

.. code:: python

    import urllib2

Build our proxy handler for urllib2. This is currently a rather crude
method of initialising urllib2, and this code will be incorporated into
the main library shortly.

.. code:: python

    proxy = urllib2.ProxyHandler({'http': 'localhost:8500'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)

Print the json from our get request. HoverPy acted as a proxy: it made
the request on our behalf, captured it, and returned it to us.

.. code:: python

    print(urllib2.urlopen("http://ip.jsontest.com/myip").read())

Switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all
it does from now on is replay the captured data.

.. code:: python

    hp.simulate()

Print the json from our get request. This time the data comes from the
store.

.. code:: python

    print(urllib2.urlopen("http://ip.jsontest.com/myip").read())
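For reference, a sketch of the same proxy wiring under Python 3, where urllib2 was folded into urllib.request (the port is the one assumed throughout this commit; this is not part of HoverPy itself):

```python
# Python 3 counterpart of the urllib2 setup above: ProxyHandler,
# build_opener and install_opener moved to urllib.request unchanged.
import urllib.request

proxy = urllib.request.ProxyHandler({'http': 'localhost:8500'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
# Subsequent urllib.request.urlopen(...) calls now go through the proxy.
```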
46 changes: 46 additions & 0 deletions docs/source/urllib3eg.rst
@@ -0,0 +1,46 @@
.. urllib3

urllib3
********


Import hoverpy's main class: HoverPy

.. code:: python

    from hoverpy import HoverPy

Create our HoverPy object in capture mode

.. code:: python

    hp = HoverPy(capture=True)

Import urllib3 for http, and build a proxy manager

.. code:: python

    import urllib3
    http = urllib3.proxy_from_url("http://localhost:8500/")

Print the json from our get request. HoverPy acted as a proxy: it made
the request on our behalf, captured it, and returned it to us.

.. code:: python

    print(http.request('GET', 'http://ip.jsontest.com/myip').data)

Switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all
it does from now on is replay the captured data.

.. code:: python

    hp.simulate()

Print the json from our get request. This time the data comes from the
store.

.. code:: python

    print(http.request('GET', 'http://ip.jsontest.com/myip').data)
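A quick look at what `proxy_from_url` returns may help here: it builds a `ProxyManager` whose parsed proxy URL can be inspected, which is a cheap way to verify the wiring before making requests (a sketch assuming urllib3's standard attribute names):

```python
# proxy_from_url returns a ProxyManager; every request made through it
# is sent to the proxy host rather than directly to the target.
import urllib3

http = urllib3.proxy_from_url("http://localhost:8500/")
print(http.proxy.host, http.proxy.port)
```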
10 changes: 9 additions & 1 deletion docs/source/usage.rst
@@ -22,4 +22,12 @@ You may have noticed this created a ``requests.db`` inside your current directory

----------------------------------------

-.. include:: modify.rst
+.. include:: modify.rst
+
+----------------------------------------
+
+.. include:: urllib2eg.rst
+
+----------------------------------------
+
+.. include:: urllib3eg.rst
2 changes: 1 addition & 1 deletion examples/basic/README.md
@@ -5,7 +5,7 @@ from hoverpy import HoverPy

```

-Import requests and random for http
+Import requests for http

```python
import requests
2 changes: 1 addition & 1 deletion examples/basic/basic.py
@@ -1,7 +1,7 @@
# import hoverpy's main class: HoverPy
from hoverpy import HoverPy

-# import requests and random for http
+# import requests for http
import requests

# create our HoverPy object in capture mode
3 changes: 2 additions & 1 deletion examples/unittesting/README.md
@@ -14,7 +14,8 @@ class TestRTD(hoverpy.TestCase):
import requests
limit = 50
sites = requests.get(
-    "http://readthedocs.org/api/v1/project/?limit=%d&offset=0&format=json" % limit)
+    "http://readthedocs.org/api/v1/project/?"
+    "limit=%d&offset=0&format=json" % limit)
objects = sites.json()['objects']
links = ["http://readthedocs.org" + x['resource_uri'] for x in objects]
self.assertTrue(len(links) == limit)
3 changes: 2 additions & 1 deletion examples/unittesting/unittesting.py
@@ -16,7 +16,8 @@ def test_rtd_links(self):
import requests
limit = 50
sites = requests.get(
-    "http://readthedocs.org/api/v1/project/?limit=%d&offset=0&format=json" % limit)
+    "http://readthedocs.org/api/v1/project/?"
+    "limit=%d&offset=0&format=json" % limit)
objects = sites.json()['objects']
links = ["http://readthedocs.org" + x['resource_uri'] for x in objects]
self.assertTrue(len(links) == limit)
51 changes: 51 additions & 0 deletions examples/urllib2eg/README.md
@@ -0,0 +1,51 @@
Import hoverpy's main class: HoverPy

```python
from hoverpy import HoverPy

```

Create our HoverPy object in capture mode

```python
hp = HoverPy(capture=True)

```

Import urllib2 for http

```python
import urllib2

```

Build our proxy handler for urllib2. This is currently a rather crude method of initialising urllib2, and this code will be incorporated into the main library shortly.

```python
proxy = urllib2.ProxyHandler({'http': 'localhost:8500'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

```

Print the json from our get request. HoverPy acted as a proxy: it made the request on our behalf, captured it, and returned it to us.

```python
print(urllib2.urlopen("http://ip.jsontest.com/myip").read())

```

Switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all it does from now on is replay the captured data.

```python
hp.simulate()

```

Print the json from our get request. This time the data comes from the store.

```python
print(urllib2.urlopen("http://ip.jsontest.com/myip").read())

```

26 changes: 26 additions & 0 deletions examples/urllib2eg/urllib2eg.py
@@ -0,0 +1,26 @@
# import hoverpy's main class: HoverPy
from hoverpy import HoverPy

# create our HoverPy object in capture mode
hp = HoverPy(capture=True)

# import urllib2 for http
import urllib2

# build our proxy handler for urllib2. This is currently a rather crude
# method of initialising urllib2, and this code will be incorporated into
# the main library shortly.
proxy = urllib2.ProxyHandler({'http': 'localhost:8500'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# print the json from our get request. HoverPy acted as a proxy: it made
# the request on our behalf, captured it, and returned it to us.
print(urllib2.urlopen("http://ip.jsontest.com/myip").read())

# switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all
# it does from now on is replay the captured data.
hp.simulate()

# print the json from our get request. This time the data comes from the store.
print(urllib2.urlopen("http://ip.jsontest.com/myip").read())
43 changes: 43 additions & 0 deletions examples/urllib3eg/README.md
@@ -0,0 +1,43 @@
Import hoverpy's main class: HoverPy

```python
from hoverpy import HoverPy

```

Create our HoverPy object in capture mode

```python
hp = HoverPy(capture=True)

```

Import urllib3 for http, and build a proxy manager

```python
import urllib3
http = urllib3.proxy_from_url("http://localhost:8500/")

```

Print the json from our get request. HoverPy acted as a proxy: it made the request on our behalf, captured it, and returned it to us.

```python
print(http.request('GET', 'http://ip.jsontest.com/myip').data)

```

Switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all it does from now on is replay the captured data.

```python
hp.simulate()

```

Print the json from our get request. This time the data comes from the store.

```python
print(http.request('GET', 'http://ip.jsontest.com/myip').data)

```

20 changes: 20 additions & 0 deletions examples/urllib3eg/urllib3eg.py
@@ -0,0 +1,20 @@
# import hoverpy's main class: HoverPy
from hoverpy import HoverPy

# create our HoverPy object in capture mode
hp = HoverPy(capture=True)

# import urllib3 for http, and build a proxy manager
import urllib3
http = urllib3.proxy_from_url("http://localhost:8500/")

# print the json from our get request. HoverPy acted as a proxy: it made
# the request on our behalf, captured it, and returned it to us.
print(http.request('GET', 'http://ip.jsontest.com/myip').data)

# switch HoverPy to simulate mode. HoverPy no longer acts as a proxy; all
# it does from now on is replay the captured data.
hp.simulate()

# print the json from our get request. This time the data comes from the store.
print(http.request('GET', 'http://ip.jsontest.com/myip').data)
