
pickle protocol support doc cleanup

clean up docs after adding pickle protocol support
1 parent 7ba4583 commit a05e4b13b16ed5621bd3b484b0c879fa17224bd4 @pandemicsyn committed Jul 15, 2012
Showing with 53 additions and 38 deletions.
  1. +32 −27 README.md
  2. +20 −10 etc/statsdpy/statsdpy.conf-sample
  3. +1 −1 setup.py
59 README.md
@@ -1,38 +1,44 @@
# statsdpy #
-Simple event based [statsd](http://github.com/etsy/statsd) implementation written in python using [eventlet](http://eventlet.net).
-Its a work in progress but the basics are there.
+Simple event-based [statsd](http://github.com/etsy/statsd) implementation written in Python using [eventlet](http://eventlet.net), with support for reporting to graphite using the pickle protocol.
### Configuration ###
-statsdpy sample config:
+statsdpy sample config with default options:
[main]
- #graphite_host = 127.0.0.1
- #graphite_port = 2003
- #graphite_pickle_port = 2004
- #listen_addr = 127.0.0.1
- #listen_port = 8125
- #If you track a large number of metrics you should you should report using
- #graphites pickle protocol. In that case switch this to yes to enable it.
- #pickle_protocol = no
- #max number of metrics to report in one go when using the pickle protocol.
- #pickle_batch_size = 300
- #debug = no
- #flush_interval = 10
- #percent_threshold = 90
-
- - Edit the config file to adjust your to your environment.
+ #graphite host and port to connect to
+ graphite_host = 127.0.0.1
+ graphite_port = 2003
+ graphite_pickle_port = 2004
+
+ #address and port to listen on for udp packets
+ listen_addr = 127.0.0.1
+ listen_port = 8125
+
+ #If you track a large number of metrics you can use the pickle protocol
+ pickle_protocol = no
+ #max number of metrics to report in one go when using the pickle protocol
+ pickle_batch_size = 300
+
+ #enabling debug mode and running in the foreground (with -f) is a great way
+ #to debug an app generating new metrics
+ debug = no
+
+ #How often to flush stats to graphite
+ flush_interval = 10
+ #calculate the XXth percentile
+ percent_threshold = 90
+
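The options above could be loaded with Python's stdlib config parser along these lines; the `load_conf` helper and the defaults handling are an illustrative sketch, not statsdpy's actual code:

```python
from configparser import RawConfigParser

# Defaults mirroring the sample config above; all values are strings,
# as RawConfigParser expects.
DEFAULTS = {
    'graphite_host': '127.0.0.1',
    'graphite_port': '2003',
    'graphite_pickle_port': '2004',
    'listen_addr': '127.0.0.1',
    'listen_port': '8125',
    'pickle_protocol': 'no',
    'pickle_batch_size': '300',
    'debug': 'no',
    'flush_interval': '10',
    'percent_threshold': '90',
}

def load_conf(path):
    """Read the [main] section, falling back to the defaults above."""
    parser = RawConfigParser(DEFAULTS)
    parser.read(path)
    if parser.has_section('main'):
        return dict(parser.items('main'))
    return dict(DEFAULTS)
```

Any option missing from the file simply keeps its default, which is why every option in the sample can be commented out.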
+ - Edit the config file appropriately for your environment
- Start the service: `statsdpy-server start --conf=/path/to/your.conf`
- Fire some udp counter, timer, or gauge events at statsdpy
- Check syslog for any errors starting up or processing events
- Profit!
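The "fire some udp events" step amounts to sending plain `name:value|type` datagrams at the listen address. A minimal sketch (the metric names are made up; host and port match the sample config):

```python
import socket

def send_event(payload, addr=('127.0.0.1', 8125)):
    """Send one statsd event to statsdpy over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode('ascii'), addr)
    sock.close()

send_event('app.requests:1|c')         # counter: increment by 1
send_event('app.response_time:32|ms')  # timer: a 32 ms sample
send_event('app.queue_depth:11|g')     # gauge: set to 11
```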
-Its important to note that statsdpy runs in debug mode by default (at least for now). So if you wont be running it in the foreground with the `-f|--foreground` flag you might wanna set `debug = false` in your config. However, running with debug enabled and in the foreground makes it very handy for debuging new statsd clients/events (just dont point it at a valid graphite host).
-
### Reporting using the pickle protocol ###
-If you track a decent # of metrics you may wish to switch to reporting using Graphites [pickle protocol](http://graphite.readthedocs.org/en/latest/feeding-carbon.html#the-pickle-protocol). The pickle protocol is a much more efficient take on the plaintext protocol, and supports sending batches of metrics to Carbon in one go. To enable it just set ``pickle_protocol`` to "yes" in your statsdpy.conf. Optionally, you can also adjust the max number of items per batch that is reported by adjusting the ``pickle_batch_size`` conf option.
+If you track a decent number of metrics you may want to switch to reporting to graphite using the [pickle protocol](http://graphite.readthedocs.org/en/latest/feeding-carbon.html#the-pickle-protocol). The pickle protocol is more efficient than the plaintext protocol, and supports sending batches of metrics to carbon in one go. To enable it, just set ``pickle_protocol`` to "yes" in your statsdpy.conf. Optionally, you can also adjust the max number of items per batch with the ``pickle_batch_size`` conf option.
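For reference, a carbon pickle-protocol payload is a pickled list of `(path, (timestamp, value))` tuples, length-prefixed with a 4-byte big-endian header. A minimal sketch of building one batch (the metric paths are illustrative):

```python
import pickle
import struct
import time

def make_pickle_payload(metrics):
    """Frame a batch of (path, (timestamp, value)) tuples for carbon."""
    payload = pickle.dumps(metrics, protocol=2)
    header = struct.pack('!L', len(payload))
    return header + payload

now = int(time.time())
batch = [('stats.example.counter', (now, 42)),
         ('stats.example.timer.mean', (now, 3.5))]
payload = make_pickle_payload(batch)
# payload would then be written over TCP to graphite_host:graphite_pickle_port
```

Batching is what makes this cheaper than the plaintext protocol: one framed payload can carry `pickle_batch_size` metrics in a single write.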
### Event Types ###
@@ -74,15 +80,14 @@ This counter is being sent sampled every 1/10th of the time.
This counter is being sampled at a 50% rate.
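The sample rates mentioned above extend the wire format with an `|@rate` suffix. A sketch of how a client might apply a 10% sample rate (the helper name and metric are illustrative, not part of statsdpy):

```python
import random
import socket

def send_sampled_counter(name, value, rate, addr=('127.0.0.1', 8125)):
    """Send a counter only `rate` of the time, tagging the packet with
    the rate so the server can scale the count back up."""
    if random.random() >= rate:
        return None  # skipped this time; the server compensates via @rate
    payload = '%s:%d|c|@%s' % (name, value, rate)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode('ascii'), addr)
    sock.close()
    return payload

send_sampled_counter('app.requests', 1, 0.1)
```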
-### Building packages ###
+### Building .deb packages ###
-Clone the version you want and build the package with [stdeb](https://github.com/astraw/stdeb "stdeb") (sudo apt-get install stdeb):
+Clone the repo and build the package with [stdeb](https://github.com/astraw/stdeb "stdeb") (sudo apt-get install stdeb):
- git clone git@github.com:pandemicsyn/statsdpy.git statsdpy-0.0.6
- cd statsdpy-0.0.6
- git checkout 0.0.6
+ git clone git@github.com:pandemicsyn/statsdpy.git
+ cd statsdpy
python setup.py --command-packages=stdeb.command bdist_deb
- dpkg -i deb_dist/python-statsdpy_0.0.6-1_all.deb
+ dpkg -i deb_dist/python-statsdpy_0.0.X-1_all.deb
### Installation via setup.py ###
30 etc/statsdpy/statsdpy.conf-sample
@@ -1,14 +1,24 @@
[main]
-#graphite_host = 127.0.0.1
-#graphite_port = 2003
-#graphite_pickle_port = 2004
-#listen_addr = 127.0.0.1
-#listen_port = 8125
+#graphite host and port to connect to
+graphite_host = 127.0.0.1
+graphite_port = 2003
+graphite_pickle_port = 2004
+
+#address and port to listen on for udp packets
+listen_addr = 127.0.0.1
+listen_port = 8125
+
#If you track a large number of metrics you should report using
#graphite's pickle protocol. In that case, switch this to yes to enable it.
-#pickle_protocol = no
+pickle_protocol = no
#max number of metrics to report in one go when using the pickle protocol.
-#pickle_batch_size = 300
-#debug = no
-#flush_interval = 10
-#percent_threshold = 90
+pickle_batch_size = 300
+
+#enabling debug mode and running in the foreground (with -f) is a great way
+#to debug an app generating new metrics
+debug = no
+
+#How often to flush stats to graphite
+flush_interval = 10
+#calculate the XXth percentile
+percent_threshold = 90
2 setup.py
@@ -14,7 +14,7 @@
version = version,
author = "Florian Hines",
author_email = "syn@ronin.io",
- description = "A simple eventlet based statsd clone",
+ description = "Eventlet based statsd clone with pickle protocol support",
license = "Apache License, (2.0)",
keywords = "statsd",
url = "http://github.com/pandemicsyn/statsdpy",
