Merge pull request #88 from SlashRoot/master
0.9.0 Release Candidate
vepkenez committed Aug 14, 2015
2 parents 1fb271e + cf1d1a8 commit 9766d14
Showing 50 changed files with 794 additions and 458 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -14,3 +14,4 @@ _trial_temp*
ignore*
*.log
*.log.*
.idea
8 changes: 4 additions & 4 deletions .travis.yml
@@ -2,12 +2,12 @@ language: python
python:
- "2.7"
install:
- pip install -r requirements
- pip install -r test-requirements
- pip install -r requirements.txt
- pip install -r requirements_dev.txt
notifications:
email:
recipients:
- info@slashrootcafe.com
- justin@justinholmes.com
on_success: change
on_failure: always
irc:
@@ -18,4 +18,4 @@ notifications:
- "Build details: %{build_url}"
- "Result: %{result}"
use_notice: true
script: python hendrix/runtests.py
script: python runtests.py
2 changes: 2 additions & 0 deletions README.md
@@ -2,6 +2,8 @@

![hendrix](docs/_static/hendrix-logo.png)

[![Build Status](https://travis-ci.org/SlashRoot/hendrix.png?branch=master)](https://travis-ci.org/SlashRoot/hendrix)

Providing web culture APIs to do the awesome of Twisted.

## Overview
53 changes: 53 additions & 0 deletions docs/caching.md
@@ -0,0 +1,53 @@
## Caching

At the moment a caching server is deployed by default on port 8000. It serves
gzipped content, which is pretty cool - right?

##### How it works

The Hendrix cache server is a reverse proxy that sits in front of your Django app.
However, if you want to bypass the cache server, you can always point
to the HTTP port (default 8080).

It works by forwarding requests to the HTTP server running the app and
caching the response when a `max-age` (in seconds) is specified in the
`Cache-Control` header.
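
As an illustration of that decision (a simplified sketch, not hendrix's actual code), the proxy only has to pull `max-age` out of the header:

```python
import re

def get_max_age(cache_control):
    """Return the max-age value (in seconds) from a Cache-Control
    header, or None if the header doesn't specify one."""
    match = re.search(r'max-age=(\d+)', cache_control or '')
    return int(match.group(1)) if match else None
```

A response carrying `Cache-Control: max-age=86400` is cacheable for a day; a response without a `max-age` passes through uncached.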

##### Busting the cache

Note that you can bust the cache (meaning force it not to cache) by passing
a query in your GET request, e.g. `http://somesite.com/my/resource?somevar=test`.
You can also force the query to cache by specifying `cache=true` in the query,
e.g. `http://somesite.com/my/resource?somevar=test&cache=true` (so long as a
`max-age` is specified for the handling view).
This means you can let the browser do some or none of the JS/CSS
caching if you so choose.
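
Those rules amount to a simple predicate (a hypothetical sketch of the behavior described above, not hendrix's internals):

```python
def should_cache(query_string, max_age):
    """Cache-busting rules: no max-age means never cache, and any
    query string busts the cache unless it opts back in with
    cache=true."""
    if not max_age:
        return False
    if query_string and 'cache=true' not in query_string:
        return False
    return True
```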

##### Caching in Django

In your project's view modules, use the `cache_control` decorator to add
a `max-age` of your choosing, e.g.

```python
from django.views.decorators.cache import cache_control

@cache_control(max_age=60*60*24) # cache it for a day
def homePageView(request):
...
```

and that's it! Hendrix will do the rest. Django docs examples [here](https://docs.djangoproject.com/en/dev/topics/cache/#controlling-cache-using-other-headers)

##### Turning the cache on

You can turn caching on by passing the flag `-c` or `--cache`. You can also change which
port you want to use with the `--cache_port` option.

##### Global vs. Local

If you're running multiple processes using the `-w` or `--workers` option, caching
is process-distributed by default, meaning there is a reverse proxy
cache server for each process. However, if you want to run the reverse
proxy server on a single process, just use the `-g` or `--global_cache` flag.

... here "local" means local to the process.
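
Putting the cache flags together, a launch might look like this (a sketch assuming the `hx` launcher; only the flag names above are documented here):

```bash
hx start -w 4 --cache --cache_port 8000 --global_cache
```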
8 changes: 5 additions & 3 deletions docs/crosstown_traffic.md
@@ -1,3 +1,5 @@
# Crosstown Traffic Decorator

@crosstown_traffic is a decorator that defers logic inside a view until a specific phase of the response process, optionally running it on a different thread (or threads).
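
To make that lifecycle concrete, here is a toy model of the pattern (illustrative only, not hendrix's implementation): callables registered during the view are held until the response is finished, then run on their own threads.

```python
import threading

class ResponsePhaseRunner:
    """Toy model: collect callables during a view, run them once
    the response has gone out."""

    def __init__(self):
        self.pending = []

    def __call__(self, same_thread=False):
        # Used as a decorator factory, like @crosstown_traffic().
        def register(func):
            self.pending.append((func, same_thread))
            return func
        return register

    def run_deferred(self):
        # The server would call this after writing the response.
        for func, same_thread in self.pending:
            if same_thread:
                func()  # blocks the worker thread
            else:
                worker = threading.Thread(target=func)
                worker.start()
                worker.join()  # joined only to keep this demo deterministic
        self.pending = []
```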

For example, let's say we are building a REST endpoint that causes 100 phone calls to be placed via the Twilio API.
@@ -18,7 +20,7 @@ def my_django_view(request):

# 100 phone calls, each on their own thread, but not starting until the response has gone out over the wire

@crosstown_traffic.follow_response()
@crosstown_traffic()
def place_100_phone_calls():
phone_call_logic()

@@ -34,7 +36,7 @@ By default, crosstown_traffic will run the decorated callable on a new thread in
However, if you want to run it on the same thread (and thus block that thread from handling other requests until the callable is finished), you can use the same_thread kwarg:

```python
@crosstown_traffic.follow_response(same_thread=True)
@crosstown_traffic(same_thread=True)
def thing_that_will_happen_after_response_on_same_thread():
hopefully_short_thing()
```
@@ -45,7 +47,7 @@ By default, if your app responds with a 5xx or 4xx status code (a Server Error o
However, if you want to change these "no_go_status_codes" per callable, you can do so:

```python
@crosstown_traffic.follow_response(no_go_status_codes=['5xx', '400-405', 302])
@crosstown_traffic(no_go_status_codes=['5xx', '400-405', 302])
def long_thing():
time.sleep(10)
print '''
131 changes: 0 additions & 131 deletions docs/features.md

This file was deleted.

6 changes: 1 addition & 5 deletions docs/index.md
@@ -14,14 +14,10 @@ Hendrix seeks to add to this discussion by focusing on:

More about the hendrix philosophy [here](philosophy.md).

## Drawbacks

* Because hendrix relies on parts of Twisted that are not compatible with Python 3, hendrix is not yet Python 3-ready for many use cases.
* For many comparable situations - especially the simple synchronous/blocking scenario, Hendrix likely uses more RAM and CPU than lighter-weight Python web servers.

## Getting started

See the [Quickstart](quickstart.md) or [FAQ](faq.md).
Use the [Development Server](quickstart.md).

## History
It started as a fork of the
30 changes: 30 additions & 0 deletions docs/installation.md
@@ -0,0 +1,30 @@
# Installation

You'll need to use **virtualenv**. If you don't have that set up, follow [the virtualenv instructions here.](http://docs.python-guide.org/en/latest/dev/virtualenvs/)

Inside your Python virtualenv:

```bash
$ pip install hendrix
```

The following Python packages are dependencies and should be installed automatically.

```
twisted
txsockjs
zope.interface
watchdog
jinja2
pychalk==0.0.5
service_identity
```

### Extra Setup for SSL

```bash
$ sudo apt-get install build-essential libssl-dev libffi-dev python-dev
$ pip install cryptography
```

For usage details, see [Quickstart](quickstart.md).
27 changes: 27 additions & 0 deletions docs/philosophy.md
@@ -10,6 +10,33 @@ Your web app doesn't have to be a WSGI app as we know it, Jim - it can also be a

### Sharing launch logic between all of your environments, from development to production, is pretty good.

It's still sadly normal to use "manage.py runserver" in development and then something completely different in staging and production environments.

Why?!

If you're using hendrix, it's almost certain that you'll use the exact same logic to launch your app in your development environment as you will on a production host - namely, import HendrixDeploy and call it.


### Don't do with a message queue what rightly belongs in a thread or process - and vice-versa.

When a Python web project is ready to add even fairly mundane asynchrony, an all-too-common solution is to leap way ahead and install the various components of a message queue solution, slapping Celery and a broker such as RabbitMQ (and sometimes also Redis) over the web logic.

For all but the most mature projects, this usually represents the biggest infrastructural change the project will have undertaken to date. You've got your team learning gevent. You're re-thinking the layout of most of your web views. You're using the @task decorator in ways that you aren't sure make sense ("isn't everything a task?" is a question you're hearing in casual conversation).

Even if this process were easy, cheap, and fast - and it's usually none of those - it's simply the wrong solution to the problem.

If you literally want to do more than one thing at the same time in Python, the proper ways to do so are with a separate thread (which shares memory with the current process) or with a separate process (which can run on the same server or a different one).
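
The thread option is a few lines of standard library code (a separate process via `multiprocessing.Process` is the analogous move when you want isolation):

```python
import threading

def crunch(results):
    # A thread shares memory with the main thread, so it can append
    # its result straight into the caller's list.
    results.append(sum(range(1000)))

results = []
worker = threading.Thread(target=crunch, args=(results,))
worker.start()
worker.join()
# results now holds [499500]
```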

Message queues are a sweet technology and they have a place. When you want to maintain a replicable, introspective list of outstanding jobs, modeled and managed in the shape and character of a "to do" list, nothing beats a message queue.

If you follow the orthodoxy described above, then by the time you actually *need* a message queue, your message queue solution will already be polluted by features that were better served by true asynchrony.

And in the near term, when what you want is to literally do more than one thing at the same time, a message queue adds unnecessary complexity, a bunch of moving parts, and obscure performance tuning.

A better first foray into asynchrony is the [crosstown_traffic](crosstown_traffic.md) API. Give it a look and see if you still want to rush to install Celery.

### Drawbacks to hendrix

* Because hendrix relies on parts of Twisted that are not compatible with Python 3, hendrix is not yet Python 3-ready for many use cases.
* For many comparable situations - especially the simple synchronous/blocking scenario, Hendrix likely uses more RAM and CPU than lighter-weight Python web servers such as uWSGI.
