Contribution by Fred Hatfull

commit a04f2c63e58ed8aa15f0bc1460098a6ca110e578 (1 parent: e557b84)
@fhats authored, jolynch committed
4 .gitignore
@@ -1,5 +1,5 @@
+*.yaml
+!config.yaml.example
4 .gitmodules
@@ -1,5 +1,5 @@
+[submodule "firefly/static/d3"]
+ path = firefly/static/d3
2  Makefile
@@ -1,5 +1,5 @@
+production:
110 README.md
@@ -1,3 +1,4 @@
+# Firefly
@@ -5,107 +6,106 @@
+## Features
+* Line, stacked, and area graphs
+* Configure graphs with arbitrary numbers of data sources
+* Configure grids of graphs -- great for creating dashboards of related information
+* View graphs from multiple datacenters in the same dashboard
+* Show historical overlays along with your real-time data
+* Log-scale Y-axis
+* Support for isolated and embedded graphs
+* Native API support for annotations
+## Prerequisites
+Firefly is written in Python and requires Python 2.6 or greater.
+### YAML
+Firefly's configuration is formatted entirely as YAML. YAML is pretty easy to pick up, but you'll still want to be familiar with the [YAML Spec](http://www.yaml.org/spec/1.2/spec.html) if you are not already.
+### Data Sources
+## Getting Started
+To get up and running immediately, make sure you update your submodules: `git submodule update --init`. Then simply run `python -m firefly.main --testing -c firefly.yaml.example` from your Firefly checkout.
+For more help: `python -m firefly.main --help` or take a peek in `firefly.yaml.example`
+_**Note:** Some configuration options can only be specified in your YAML configuration file._
+## Deploying
+### How Firefly Runs in Production
+### Setting Up the Data Servers
+### Setting Up the UI Server
+You will need to list each data server you set up in the `data_servers` section of the `ui_server` configuration. Each data server is specified by the URL it can be reached at (the `name` attribute) and a description of its environment (the `desc` attribute).
+If you are hosting Firefly on the same machine as other web services or running behind a reverse proxy, you might want to set `url_path_prefix` and `port` to your desired values. By default Firefly runs with a URL prefix of `/firefly/` when not in `--testing` mode.
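+For reference, a `ui_server` section combining these options might look like the sketch below (the host names and descriptions are placeholders for your own data servers):
+
+    ui_server:
+      port: 8889
+      url_path_prefix: "/firefly/"
+      data_servers:
+        - name: "http://dc1.example.com:8890/"
+          desc: first_datacenter
+        - name: "http://dc2.example.com:8890/"
+          desc: second_datacenter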
+### Starting Your Firefly Servers
+Simply start each Firefly instance you have in your various environments with the appropriate configuration files:
+`python -m firefly.main -c <configuration file>`
+## Developing
+First, give yourself a base YAML configuration file: `cp firefly.yaml.example firefly.yaml`. Firefly reads firefly.yaml first and then overrides the values in that file with any command line options you pass. This allows you to flip various switches during development and then set them later in your production config (configuration files specified with `-c` always override other command line options).
+There are a few configuration options you'll want to fill in before you can start graphing. Firefly is divided into two parts: a **data server** and a **ui server**, each of which has its own set of configuration options. Below are the various options you'll want to set to get started from a fresh checkout.
+### Data Server Configuration
+If you have an accessible Ganglia instance running, you should set the location of the Ganglia RRD socket and storage in the `rrdcached_socket` and `rrdcached_storage` settings of `data_sources.ganglia_rrd.GangliaRRD`:
+    data_source_config:
+      data_sources.ganglia_rrd.GangliaRRD:
+        rrdcached_socket: "/path/to/your/ganglia/rrd/unix/domain/socket.sock"
+        rrdcached_storage: "/path/to/your/ganglia/rrd/storage"
+If you do not have Ganglia running, comment out this data source in the `data_sources` section:
+    data_sources:
+      # - data_sources.ganglia_rrd.GangliaRRD
+      - data_sources.stat_monster_rrd.StatMonsterRRD
+If you want to run with any additional custom data sources, add them to the `data_sources` section and provide the kwargs they will be passed in the `data_source_config` section.
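+As a rough sketch of what a custom data source can look like (the class, module, and stat names here are hypothetical, and the return formats are inferred from the bundled data sources):
+
+    import json
+
+    import firefly.data_source
+
+    class ConstantValue(firefly.data_source.DataSource):
+        """Example data source that reports a constant value for every second."""
+
+        DESC = "Constant example stat"
+
+        def __init__(self, *args, **kwargs):
+            super(ConstantValue, self).__init__(*args, **kwargs)
+            # kwargs come from the matching data_source_config entry
+            self.value = kwargs.get('value', 1.0)
+
+        def list_path(self, path):
+            # A single "file" in the source tree
+            return [{'type': 'file', 'name': 'constant'}]
+
+        def data(self, sources, start, end, width):
+            # One datapoint per second, shaped like the other data sources' output
+            points = [{'t': t, 'v': [self.value]} for t in range(start, end)]
+            return json.dumps(points)
+
+        def legend(self, sources):
+            return [[["constant"], "#ff0000"]]
+
+        def title(self, sources):
+            return ["Constant"]
+
+You would then list the module path of this class (for example `data_sources.constant.ConstantValue`) under `data_sources` and put its kwargs (here, `value`) under `data_source_config`.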
+### Running Firefly
+`python -m firefly.main --testing`
+This starts both a data server and a UI server in the same web server running on the local machine. The UI server is accessible on `localhost:8889` by default. Note that you do not need to have any data servers configured in the UI server for the UI server to know about the local data server running alongside it!
+Firefly will also give you a test data source to use, which will produce a constant sine wave across all time periods.
+## Miscellaneous Configuration
+### Annotations
+Firefly supports annotating graphs with various events that you might be interested in. To add an annotation, send an HTTP `POST` request to the `/add_annotation` endpoint on each of your data servers that the annotation applies to (see the sketch after the parameter list). This endpoint expects four arguments in the `POST` body:
+* `token` - A token obtained from the UI server (`GET http://ui_server/token`).
+* `type` - Specifies the type of annotation. Should be a single word with alphanumeric characters only. This is the text displayed on the graphs next to the annotation marker.
+* `description` - A description of the event this annotation is for.
+* `time` - The time the annotation occurred, as a floating point number of seconds since the epoch.
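+As a rough illustration (a sketch using only the Python 2 standard library; the host names are placeholders for your own UI and data servers):
+
+    import time
+    import urllib
+    import urllib2
+
+    # Fetch a short-lived access token from the UI server
+    token = urllib2.urlopen("http://ui_server:8889/token").read()
+
+    # Post the annotation to each data server the event applies to
+    body = urllib.urlencode({
+        'token': token,
+        'type': 'deploy',                      # alphanumeric only
+        'description': 'Deployed release 42',  # letters, digits, spaces and ()'- only
+        'time': '%f' % time.time(),
+    })
+    print urllib2.urlopen("http://data_server:8890/add_annotation", body).read()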
+### Database Files
+You can control where the data server and UI server put their SQLite database files with the `db_file` configuration variable, which can be set for both `data_server` and `ui_server`.
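+For instance, to keep both databases under a single directory (the path here is just an example), you could set:
+
+    data_server:
+      db_file: "/var/lib/firefly/data_server.sqlite"
+    ui_server:
+      db_file: "/var/lib/firefly/ui_server.sqlite"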
4 docs/conf.py
@@ -45,9 +45,9 @@
+version = '1.0'
+release = '1.0'
146 firefly.yaml.example
@@ -1,73 +1,73 @@
+## Firefly example configuration
+##
+## Copy and customize this file in firefly.yaml or in an external location
+## (can be specified with -c, --config at run-time)
+##
+## Most of these options can be overridden on the command line as well.
+## See python -m firefly.main --help for more information
+
+# Whether or not we should omit starting the DATA SERVER
+omit_data_server: false
+# Whether or not we should omit starting the UI SERVER
+omit_ui_server: false
+
+# Secret key used to generate tokens for authentication with the dataserver
+# by the client. Be careful not to check this into your source code!
+secret_key: "DEBUG"
+
+# Options related to the DATA SERVER
+data_server:
+
+  # Port for the DATA SERVER to listen on
+  port: 8890
+
+  # The data sources to collect data from.
+  # These are Python modules contained in data_sources/
+  data_sources:
+    - data_sources.ganglia_rrd.GangliaRRD
+    - data_sources.stat_monster_rrd.StatMonsterRRD
+
+  # Configuration for the selected data sources
+  # Gets passed as kwargs to the DataSource constructor
+  data_source_config:
+    # Both of these datasources rely on an rrdcached to be present so that
+    # they can read entries from RRDs. Set these to something other than
+    # null or firefly will crash.
+    data_sources.ganglia_rrd.GangliaRRD:
+      rrdcached_socket: null
+      rrdcached_storage: null
+    data_sources.stat_monster_rrd.StatMonsterRRD:
+      # Location of the rrdcached socket file
+      rrdcached_socket: null
+      # Location of the rrdcached storage file
+      rrdcached_storage: null
+
+  # The location of the SQLite database file which contains the data store
+  # for the DATA SERVER
+  db_file: "data/data_server.sqlite"
+
+# Options related to the UI SERVER
+ui_server:
+
+  # Port for the UI SERVER to listen on
+  port: 8889
+
+  # Firefly DATA SERVERs to connect to for data.
+  # If --no-data-server is specified, this option should be specified.
+  # Otherwise, if both a DATA SERVER and a UI SERVER are being run,
+  # the local DATA SERVER will be added to this list.
+  # Uncomment this option and replace it with the location of your DATA SERVER
+  # when you are ready to run outside of --testing mode.
+  #data_servers:
+  #  - name: "http://some_host:8890/"
+  #    desc: that_one_datacenter
+
+  # The SQLite database file to keep UI SERVER data in
+  db_file: "data/ui_server.sqlite"
+
+  # Should we enable Javascript logging?
+  js_logging_enabled: false
+
+  # The URL prefix to use OUTSIDE of --testing mode.
+  # Note that this is just / in --testing mode.
+  url_path_prefix: "/firefly/"
364 firefly/data_server.py
@@ -1,13 +1,20 @@
+import datetime
+import re
+import signal
+import socket
+import sqlite3
+import util
+import tornado.httpclient
@@ -18,224 +25,217 @@
+ def new_method(self):
+ token = self.get_argument('token')
+ if not util.verify_access_token(token, self.application.settings['secret_key']):
+ raise tornado.web.HTTPError(403)
+ method(self)
+ return new_method
+ @token_authed
+ def get(self):
+ path = json.loads(self.get_argument('path'))
+ if not path:
+ contents = self._list_sourcelists()
+ else:
+ ds = self.application.settings['data_sources_by_key'][path[0]]
+ contents = ds.list_path(path[1:])
+ self.set_header("Content-Type", "application/json")
+ self.set_header("Cache-Control", "no-cache, must-revalidate")
+ self.set_header("Access-Control-Allow-Origin", "*")
+ self.write(json.dumps(contents))
+ def _list_sourcelists(self):
+ sourcelists = [{
+ 'name': src._FF_KEY,
+ 'type': 'data_source',
+ 'desc': src.DESC,
+ 'children': None} for src in self.application.settings['data_sources']]
+ return sourcelists
+ """Base class implementing common ops"""
+ def get_params(self):
+ sources = json.loads(self.get_argument('sources'))
+ start = int(self.get_argument('start', 0))
+ end = int(self.get_argument('end', 0))
+ zoom = int(self.get_argument('zoom', 0))
+ width = int(self.get_argument('width', 0))
+ height = int(self.get_argument('height', 0))
+ y_axis_log_scale = self.get_argument('y_axis_log_scale', False)
+ y_axis_origin_zero = self.get_argument('y_axis_origin_zero', False)
+ overlay_previous_period = self.get_argument('overlay_previous_period', False)
+ stacked_graph = self.get_argument('stacked_graph', False)
+ area_graph = self.get_argument('area_graph', False)
+ if sources:
+ data_source, sources = parse_sources(sources, self.application.settings['data_sources_by_key'])
+ else:
+ data_source = None
+ return {
+ 'data_source': data_source,
+ 'sources': sources,
+ 'start': start,
+ 'end': end,
+ 'width': width,
+ 'height': height,
+ 'options': {
+ 'zoom': zoom,
+ 'y_axis_log_scale': y_axis_log_scale,
+ 'y_axis_origin_zero': y_axis_origin_zero,
+ 'overlay_previous_period': overlay_previous_period,
+ 'stacked_graph': stacked_graph,
+ 'area_graph': area_graph}}
+ """Handler for json graph data"""
+ @token_authed
+ def get(self):
+ params = self.get_params()
+ data = params['data_source'].data(
+ params['sources'],
+ params['start'],
+ params['end'],
+ params['width'])
+ self.set_header("Content-Type", 'application/json')
+ self.set_header("Cache-Control", "no-cache, must-revalidate")
+ self.set_header("Access-Control-Allow-Origin", "*")
+ self.write(data)
+ """Handler for the legend data for a given graph"""
+ @token_authed
+ def get(self):
+ params = self.get_params()
+ svc = params['data_source'].legend(params['sources'])
+ self.set_header('Content-Type', 'application/json')
+ self.set_header("Cache-Control", "no-cache, must-revalidate")
+ self.set_header("Access-Control-Allow-Origin", "*")
+ self.write(json.dumps({'legend': svc}))
+ """Handler for the title data for a given graph"""
+
+ @token_authed
+ def get(self):
+ params = self.get_params()
+ title = params['data_source'].title(params['sources'])
+
+ self.set_header('Content-Type', 'application/json')
+ self.set_header("Cache-Control", "no-cache, must-revalidate")
+ self.set_header("Access-Control-Allow-Origin", "*")
+ self.write(json.dumps({'title': title}))
+
+class AnnotationsHandler(GraphBaseHandler):
+ """Handler to provide annotations data for a graph"""
+
+ @token_authed
+ def get(self):
+ params = self.get_params()
+
+ cursor = self.settings["db"].cursor()
+
+ annotations_rows = cursor.execute('SELECT type, description, time, id FROM annotations WHERE time >= ? and time <= ?', (params['start'], params['end']))
+
+ self.set_header('Content-Type', 'application/json')
+ self.set_header("Cache-Control", "no-cache, must-revalidate")
+ self.set_header("Access-Control-Allow-Origin", "*")
+
+ keys = [desc[0] for desc in cursor.description]
+ # This funky comprehension(s) associates a key with each value in each row
+ # giving us a nice list of dicts instead of just a list of tuples
+ # should help debugging and clarity on the javascript side
+ annotations = [dict((key, value) for key, value in zip(keys, row)) for row in cursor]
+
+ cursor.close()
+
+ self.write(json.dumps(annotations))
+
+class AddAnnotationHandler(tornado.web.RequestHandler):
+ """Handler to take POSTs to add annotations to the database."""
+
+ TYPE_RE = re.compile('^[A-Za-z0-9]+$')
+ DESCRIPTION_RE = re.compile('^[A-Za-z0-9 \(\)\'\-]+$')
+ @token_authed
+ def post(self):
+ """Given a type, a description, and a time, insert an annotation into the annotations database.
+ Params:
+ type: A string describing the type of annotation this is
+ description: A string with additional details about the event that this annotation represents
+ time: A floating-point number of seconds representing the time at which the event occurred.
+ """
+ an_type, an_desc, an_time = ((self.get_argument(param) for param in ('type', 'description', 'time')))
+ an_time = float(an_time)
+ if not self.TYPE_RE.match(an_type):
+ raise tornado.httpclient.HTTPError(400, "Invalid annotation type specified.")
+ if not self.DESCRIPTION_RE.match(an_desc):
+ raise tornado.httpclient.HTTPError(400, "Invalid annotation description specified.")
+ # Insert this annotation into the DB
+ # Note that SQLite takes care of the sanitation here for us, so this isn't quite as scary as it looks
+ self.settings['db'].execute("INSERT INTO annotations (type, description, time) VALUES (?,?,?)", (an_type, an_desc, an_time))
+ # It's comforting to know things went alright
+ self.write(json.dumps({"status": "ok"}))
+ """Handler for monitoring"""
+ def get(self):
+ self.write('pong\n')
+ data_source_name = sources[0][0]
+ ds = data_sources_by_key[data_source_name]
+ srcs = [source[1:] for source in sources]
+ return ds, srcs
+def initialize_data_server(config, secret_key=None, ioloop=None):
+ if ioloop is None:
+ ioloop = tornado.ioloop.IOLoop.instance()
+ # connect to the database to store annotation in
+ # I kind of hate having the schema for this DB here, but I'm going to leave it to retain parity with ui_server.py
+ db_conn = sqlite3.connect(config['db_file'], isolation_level=None)
+ db_conn.execute("""
+ create table if not exists annotations (
+ id integer primary key autoincrement,
+ type integer not null,
+ description text not null,
+ time float not null
+ )""")
+ db_conn.execute("create index if not exists time on annotations(time)")
+ config['db'] = db_conn
+ config["secret_key"] = secret_key
+ # init the application instance
+ application = tornado.web.Application([
+ (r"/data", DataHandler),
+ (r"/legend", GraphLegendHandler),
+ (r"/title", GraphTitleHandler),
+ (r"/ping", PingHandler),
+ (r"/annotations", AnnotationsHandler),
+ (r"/add_annotation", AddAnnotationHandler),
+ (r"/sources", SourcesHandler)], **config)
+ # start the main server
+ http_server = tornado.httpserver.HTTPServer(application, io_loop=ioloop)
+ http_server.listen(config["port"])
+ log.info('Firefly data server started on port %d' % config["port"])
30 firefly/data_source.py
@@ -1,25 +1,25 @@
+ """Base class for Firefly Data Sources"""
+ DESC = "Base class for Firefly Data Sources"
+ def __init__(self, *args, **kwargs):
+ self.logger = logging.getLogger(__name__)
+ def list_path(self, path):
+ """given an array of path components, list the (presumable) directory"""
+ raise NotImplemented
+ def graph(self):
+ raise NotImplemented
+ def data(self):
+ raise NotImplemented
+ def legend(self):
+ raise NotImplemented
+
+    def title(self):
+        raise NotImplementedError
178 firefly/data_sources/ganglia_rrd.py
@@ -6,92 +6,92 @@
+import firefly.data_source
+
+class GangliaRRD(firefly.data_source.DataSource):
+ """Stats from Ganglia"""
+
+ DESC = "Ganglia Stats"
+
+ def __init__(self, *args, **kwargs):
+ super(GangliaRRD, self).__init__(*args, **kwargs)
+ self.DAEMON_ADDR = kwargs['rrdcached_socket']
+ self.GRAPH_ROOT = kwargs['rrdcached_storage']
+
+ def list_path(self, path):
+ """given an array of path components, list the (presumable) directory"""
+ contents = []
+ root = self.GRAPH_ROOT if not path else os.path.join(self.GRAPH_ROOT, os.path.join(*path))
+ for name in sorted(os.listdir(root)):
+ if os.path.isdir(os.path.join(root, name)):
+ entries = self._form_entries_from_dir(root, name)
+ if entries:
+ contents.extend(entries)
+ else:
+ entries = self._form_entries_from_file(root, name)
+ if entries:
+ contents.extend(entries)
+ return contents
+
+ def _form_entries_from_dir(self, root, name):
+ return [{'type': 'dir', 'name': name, 'children': None}]
+
+ def _form_entries_from_file(self, root, name):
+ return [{'type': 'file', 'name': name[:-4]}]
+
+ def _svc(self, sources):
+ colorstep = 1.0 / len(sources)
+ svc = zip(sources, ("#%s" % ("%02x%02x%02x" % colorsys.hsv_to_rgb(i*colorstep, 1, 255)) for i in xrange(len(sources))))
+ return svc
+
+ def _form_def(self, idx, source):
+ source = "%s.rrd" % '/'.join(source)
+ return "DEF:ds%d=%s/%s:sum:AVERAGE" % (idx, self.GRAPH_ROOT, os.path.join(*source.split('/')))
+
+ def data(self, sources, start, end, width):
+ opts = [
+ "/usr/bin/rrdtool", "xport",
+ "--start", str(start),
+ "--end", str(end),
+ "--maxrows", str(width)]
+
+ conditionals = [
+ # flush rrdcached before making graph
+ (self.DAEMON_ADDR , ['--daemon', self.DAEMON_ADDR])]
+
+ for condition, optlist in conditionals:
+ if condition:
+ opts.extend(optlist)
+
+ defs = []
+ lines = []
+ for idx, source in enumerate(sources):
+ defs.append(self._form_def(idx, source))
+ lines.append("XPORT:ds%d" % idx)
+
+ pipe = subprocess.Popen(opts + defs + lines, stdout=subprocess.PIPE)
+ xport_stdout, xport_stderr = pipe.communicate()
+ if pipe.returncode != 0:
+ raise tornado.web.HTTPError(500, log_message=xport_stderr)
+
+ data = []
+ try:
+ for row in ET.fromstring(xport_stdout).findall("data/row"):
+ time = int(row.findtext("t"))
+ values = []
+ for v in row.findall("v"):
+ value = float(v.text)
+ values.append("%g" % value if not math.isnan(value) else None)
+ if any(values):
+ values_string = ",".join(v if v else "null" for v in values)
+ data.append('{"t":%d,"v":[%s]}' % (time, values_string))
+ except Exception, e:
+ raise tornado.web.HTTPError(500, log_message=str(e))
+
+ return "[%s]" % ",".join(data)
+
+ def legend(self, sources):
+ return self._svc(sources)
+
+ def title(self, sources):
+ return ["ganglia"]
100 firefly/data_sources/stat_monster_rrd.py
@@ -9,53 +9,53 @@
+ """Stats from StatMonster"""
+
+ DESC = "StatMonster"
+
+ def __init__(self, *args, **kwargs):
+ super(StatMonsterRRD, self).__init__(*args, **kwargs)
+ self.GRAPH_ROOT = kwargs['rrdcached_storage']
+ self.DAEMON_ADDR = kwargs['rrdcached_socket']
+
+ def _form_entries_from_file(self, root, name):
+ info = rrdtool.info(str(os.path.join(root, name)))
+ dses = set()
+ for entry in info.keys():
+ match = ds_re.match(entry)
+ if match:
+ dses.add(match.group(1))
+
+ return [{'type': 'file', 'name': "%s_%s" % (name[:-4], stat)} for stat in sorted(list(dses))]
+
+ def _form_def(self, idx, src):
+ src_root = src[:-1]
+ try:
+ src_file_basenamea, source_file_basenameb, ds_name = src[-1].split('_', 2)
+ src_file_basename = '_'.join((src_file_basenamea, source_file_basenameb))
+ except:
+ src_file_basename, ds_name = src[-1].rsplit('_', 1)
+
+ fn = "%s/%s/%s.rrd" % (self.GRAPH_ROOT, '/'.join(src_root), src_file_basename)
+
+ return "DEF:ds%d=%s:%s:AVERAGE" % (idx, fn, ds_name)
+
+ def legend(self, sources):
+ if len(sources) == 1:
+ return self._svc([[sources[0][-1]]])
+ else:
+ _sources = ['/'.join([s.split('.')[1] if '.' in s else s for s in src[:-1]]) for src in sources]
+ common_root = os.path.commonprefix(_sources)
+ out = []
+ for idx,src in enumerate(_sources):
+ # just....don't ask
+ out.append([foo for foo in _sources[idx][len(common_root):].split('/') + [sources[idx][-1]] if foo])
+ return self._svc(out)
+
+ def title(self, sources):
+ if len(sources) == 1:
+ return [src.split('.')[1] for src in sources[0][:-1]]
+ else:
+ _sources = ['/'.join([s.split('.')[1] if '.' in s else s for s in src[:-1]]) for src in sources]
+ common_root = os.path.commonprefix(_sources)
+ return common_root.split('/')
112 firefly/data_sources/test_data.py
@@ -4,59 +4,59 @@
+ DESC = "testing"
+
+ def list_path(self, path):
+ if not path:
+ return [{'type': 'file', 'name': 'test-data-plain'},
+ {'type': 'file', 'name': 'test-data-moving'},
+ {'type': 'file', 'name': 'test-data-discontinuous'}]
+
+ def graph(self):
+ raise NotImplemented
+
+ def data(self, sources, start, end, width):
+ # TODO (bstack): This needs to be done in a more clever way
+ span = end - start
+ data = []
+ sources = self._flat_sources(sources)
+ for x in xrange(span):
+ t = x + start
+ val = []
+ for source in sources:
+ if source == 'test-data-plain':
+ val.append(math.sin(x*math.pi/(span/4)))
+ if source == 'test-data-moving':
+ val.append(math.sin(t*math.pi/(span/4)))
+ if source == 'test-data-discontinuous':
+ sine = math.sin(1+x*math.pi/(span/4))
+ if -0.3 < sine < 0.3:
+ disc = None
+ else:
+ disc = sine/2
+ val.append(disc)
+ if any(val):
+ values_string = ",".join("%0.4f"%(v,) if v else "null" for v in val)
+ data.append('{"t":%d,"v":[%s]}' % (t, values_string))
+ return "[%s]" % ','.join(data)
+
+ def legend(self, sources):
+ sources = self._flat_sources(sources)
+ titles = []
+ for source in sources:
+ if source == 'test-data-plain':
+ titles.append([["test-data-plain"], "#ff0000"])
+ if source == 'test-data-moving':
+ titles.append([["test-data-moving"], "#ff0000"])
+ if source == 'test-data-discontinuous':
+ titles.append([["test-data-discontinuous"], "#ff0000"])
+ return titles
+
+ def _flat_sources(self, sources):
+ flat_sources = []
+ for slist in sources:
+ flat_sources.extend(slist)
+ return flat_sources
+
+ def title(self, sources):
+ return ["Test Data"]
404 firefly/main.py
@@ -1,202 +1,202 @@
+"""Responsible for starting up Firefly.
+
+python -m firefly.main
+
+[or]
+
+serviceinit.d/firefly start
+
+Configuration precedence works as follows:
+* If a configuration file is specified with -c, --config, that configuration file is loaded
+ and takes precedence over other configuration options
+* If a supported option is specified via a command line argument, that value is set
+* If neither of the above occurs, configuration is read from the default firefly.yaml config
+"""
+
+from collections import defaultdict
+import hashlib
+import logging
+from optparse import OptionGroup, OptionParser
+import os
+import socket
+import sys
+import util
+
+import tornado.ioloop
+import yaml
+
+from firefly.data_server import initialize_data_server
+from firefly.ui_server import initialize_ui_server
+
+logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+log = logging.getLogger('firefly')
+
+def load_config_from_file(config_file):
+ with open(config_file, 'r') as f:
+ config = yaml.load(f)
+ return config
+
+if __name__ == "__main__":
+ try:
+ config = load_config_from_file("firefly.yaml")
+ except IOError:
+ # If the user doesn't have a local firefly.yaml, let's assume it will be passed with -c
+ # (or they will specify all the arguments on the command line...)
+ config = {}
+
+ default_data_server_opts = config.get('data_server', {})
+ default_ui_server_opts = config.get('ui_server', {})
+
+ parser = OptionParser()
+ data_server_group = OptionGroup(parser,
+ "Data Server Options",
+ "These options control various aspects of the data server, which serves data to the UI server.")
+ ui_server_group = OptionGroup(parser,
+ "UI Server Options",
+ "These options control various aspects of the UI server, which serves the frontend of Firefly.")
+
+ test_mode_help = """\
+Runs in test mode:
+1. Add a test data server and data source.
+2. Making code changes will automatically restart the server."""
+ parser.add_option('-c',
+ '--config',
+ dest='config_file',
+ default=None,
+ help="Specify a configuration file to read from.")
+ parser.add_option('--testing',
+ dest='testing',
+ action='store_true',
+ default=False,
+ help=test_mode_help)
+ parser.add_option('--no-data-server',
+ dest="omit_data_server",
+ action="store_true",
+ default=config.get('omit_data_server', False),
+ help="Disable the data server.")
+ parser.add_option('--no-ui-server',
+ dest="omit_ui_server",
+ action="store_true",
+ default=config.get('omit_ui_server', False),
+ help="Disable the UI server")
+
+ data_server_group.add_option('--dataserver-port',
+ dest='dataserver_port',
+ default=default_data_server_opts.get('port', 8890),
+ type=int,
+ help="The port for the dataserver to listen on")
+ data_server_group.add_option('--rrdcached-storage',
+ dest='rrdcached_storage',
+ default=default_data_server_opts.get('rrdcached_storage', None),
+ help='Base directory where rrdcached stores RRD files (skip when --testing)')
+ data_server_group.add_option('--rrdcached-socket',
+ dest='rrdcached_socket',
+ default=default_data_server_opts.get('rrdcached_socket', None),
+ help='Path to domain socket rrdcached is listening on (skip when --testing)')
+ data_server_group.add_option('--dataserver-db-file',
+ dest='dataserver_db_file',
+ default=default_data_server_opts.get('db_file', os.path.join('data', 'data_server.sqlite')),
+ help='SQLite database file to keep data server information in')
+
+ ui_server_group.add_option('--uiserver-port',
+ dest='uiserver_port',
+ default=default_ui_server_opts.get('port', 8889),
+ help='Port to listen on (default %default)')
+ ui_server_group.add_option('--uiserver-db-file',
+ dest='uiserver_db_file',
+ default=default_ui_server_opts.get('db_file', os.path.join('data', 'ui_server.sqlite')),
+ help='SQLite database file to keep UI server information in')
+ ui_server_group.add_option('--url-path-prefix',
+ dest='url_path_prefix',
+ default=default_ui_server_opts.get('url_path_prefix', '/firefly/'),
+ help="URL prefix to use")
+
+ parser.add_option_group(data_server_group)
+ parser.add_option_group(ui_server_group)
+
+ options, args = parser.parse_args()
+
+ # Make sure we don't get some weird conditions, like disabling
+ # both the UI and the data server
+ if options.omit_data_server and options.omit_ui_server:
+ parser.error("--no-data-server and --no-ui-server both specified!")
+
+ if options.config_file:
+ config = load_config_from_file(options.config_file)
+ else:
+ config = {
+ "data_server": {
+ "port": options.dataserver_port,
+ "rrdcached_storage": options.rrdcached_storage,
+ "rrdcached_socket": options.rrdcached_socket,
+ "data_sources": config.get("data_server", {}).get("data_sources", []),
+ "data_source_config": config.get("data_server", {}).get("data_source_config", defaultdict(dict)),
+ "db_file": options.dataserver_db_file
+ },
+ "ui_server": {
+ "port": options.uiserver_port,
+ "data_servers": config.get("ui_server", {}).get("data_servers", []),
+ "db_file": options.uiserver_db_file,
+ "url_path_prefix": options.url_path_prefix
+ }
+ }
+
+ config["testing"] = options.testing
+ config["data_server"]["data_sources_by_key"] = {}
+
+ if options.config_file is None:
+ config["config_file"] = "firefly.yaml"
+ else:
+ config["config_file"] = options.config_file
+
+ if config["testing"]:
+ config['data_server']['data_sources'].append('data_sources.test_data.TestData')
+
+ # if we're testing and not behind a reverse proxy (apache), make the base
+ # url / instead of /firefly to compensate for apache url rewriting
+ config['ui_server']['url_path_prefix'] = '/'
+
+ # Turn on automatic code reloading
+ config["data_server"]['debug'] = True
+ config["ui_server"]['debug'] = True
+
+ log.info("Running in TEST MODE")
+
+ if "data_servers" not in config['ui_server']:
+ config['ui_server']['data_servers'] = []
+
+ if not options.omit_data_server:
+ config['ui_server']['data_servers'].append({
+ 'name': 'http://%s:%s' % (socket.getfqdn(), config["data_server"]["port"]),
+ 'desc': socket.getfqdn()
+ })
+
+ data_sources = []
+
+ # mix in the configured data sources to the data server configuration
+ def get_ds_instance(ds):
+ ds_class = util.import_module_class(ds)
+ ds_kwargs = config['data_server']['data_source_config'].get(ds, {})
+ ds_instance = ds_class(**ds_kwargs) # args only used by StatMonsterRRD atm
+ key = hashlib.sha1(ds).hexdigest()[:6]
+ ds_instance._FF_KEY = key
+ config['data_server']['data_sources_by_key'][key] = ds_instance
+ return ds_instance
+
+ for ds in config['data_server']['data_sources']:
+ ds_instance = get_ds_instance(ds)
+ data_sources.append(ds_instance)
+ log.debug('Using datasource %s' % type(ds_instance).__name__)
+
+ config["data_server"]["data_sources"] = data_sources
+
+ if not options.omit_data_server:
+ # Allow the data server to initialize itself and attach itself to the IOLoop
+ initialize_data_server(config["data_server"], secret_key=config["secret_key"], ioloop=tornado.ioloop.IOLoop.instance())
+
+ if not options.omit_ui_server:
+ # Allow the UI server to initialize itself and attach itself to the IOLoop
+ initialize_ui_server(config["ui_server"], secret_key=config["secret_key"], ioloop=tornado.ioloop.IOLoop.instance())
+
+ # Kick everything off
+ tornado.ioloop.IOLoop.instance().start()
BIN  firefly/static/fonts/Inconsolata.ttf
Binary file not shown
BIN  firefly/static/img/stripez4.png
10 firefly/static/js/graph_dash.js
@@ -243,6 +243,7 @@
+ return {'columnCount': this.view.columnCount, 'rows': rows};
@@ -268,6 +269,7 @@
+ var serialized = this.getIsolatedSerial(graph);
@@ -278,6 +280,7 @@
+ return {'columnCount': 1, 'rows': rows};
@@ -434,14 +437,11 @@
+ url: instance.controller.makeURL_('shorten'),
+ data: JSON.stringify(serial),
58 firefly/static/js/renderer.js
@@ -253,6 +253,7 @@
+ }
@@ -376,16 +377,21 @@
+ var annotations = div.select(".annotations").selectAll(".annotation").data(data.annotations);
+ annotations.enter().append("svg:line").attr("class", "annotation").attr('data-id', function(d){ return d.id; });
+ var annotation_tooltips = div.selectAll(".annotation-tooltip").data(data.annotations);
+ .attr('data-id', function(d){ return d.id; })
+ .attr('title', function(d){ return d.description; });
@@ -485,8 +491,16 @@
+ .y(function(d) {
+ if (d.y !== null) {
+ return that.yScale(d.y + d.y0);
+ }
+ else {
+ return d.y;
+ }
+ });
@@ -586,14 +600,19 @@
+ .attr("x1", function(d){ return renderer.xScale(d.time); })
+ .attr("x2", function(d){ return renderer.xScale(d.time); });
+ var label = d.type;
+ .style('top', function(d) { return (renderer._pickAnnotationToolTipLocation(d, this, data)[1]) + 'px'; })
+ .style('left', function(d) { return (renderer._pickAnnotationToolTipLocation(d, this, data)[0]) + 'px'; });
@@ -659,10 +678,13 @@
+ /**
+ *
+ */
@@ -675,6 +697,7 @@
+ }
@@ -713,6 +736,9 @@
+ }
+ }
+ }
@@ -733,9 +759,11 @@
+ }
+ }
@@ -784,35 +812,7 @@
+ }
122 firefly/static/js/renderer_worker.js
@@ -3,6 +3,7 @@
+var annotationsXHR;
@@ -21,12 +22,16 @@
+ annotationsXHR && annotationsXHR.abort();
+ if (data.options.show_annotations) {
+ annotationsXHR = fetchAnnotations(data.start, data.end);
+ }
@@ -44,18 +49,62 @@
+function fetchAnnotations(start, end) {
+ var xhr = new XMLHttpRequest();
+ var url = data.dataServer + "/annotations?" +
+ "sources=" + encodeURIComponent(JSON.stringify(data.sources)) +
+ "&start=" + (start - 60) + // buffer for one minute
+ "&end=" + end +
+ "&width=" + data.width +
+ "&token=" + data.token;
+ xhr.open("GET", url, true);
+ xhr.onreadystatechange = handleResponse;
+ xhr.send(null);
+ return xhr;
+ // handleResponse will get called again if the annotations XHR isn't ready and we want annotations data
+ // therefore we can skip this.
+ if (!data.options.show_annotations || annotationsXHR.readyState === 4){
+ if (data.options.overlay_previous_period) {
+ if (currentXHR.readyState === 4 && previousXHR.readyState === 4) {
+ if (currentXHR.status === 200 && previousXHR.status === 200) {
+ var currentData = JSON.parse(currentXHR.responseText);
+ var previousData = JSON.parse(previousXHR.responseText);
+ var annotationsData = [];
+ if (data.options.show_annotations && annotationsXHR.status === 200){
+ annotationsData = JSON.parse(annotationsXHR.responseText);
+ }
+ processData(currentData, previousData, annotationsData);
+ } else {
+ // status code 0 means aborted
+ if (currentXHR.status > 0 || previousXHR.status > 0) {
+ throw "Error: received " + currentXHR.status + ", " + previousXHR.status;
+ }
+ } else {
+ if (currentXHR.readyState === 4) {
+ if (currentXHR.status === 200) {
+ var currentData = JSON.parse(currentXHR.responseText);
+ var annotationsData = [];
+ if (data.options.show_annotations && annotationsXHR.status === 200){
+ annotationsData = JSON.parse(annotationsXHR.responseText);
+ }
+ processData(currentData, [], annotationsData);
+ } else {
+ if (currentXHR.status > 0) {
+ throw "Error: received " + currentXHR.status;
+ }
+function processData(currentData, previousData, annotationsData) {
@@ -98,6 +147,16 @@
+ // restructure annotations so we have the correct types for all the data
+ var annotations = []
+ for(idx in annotationsData){
+ annotations.push({
+ id: parseInt(annotationsData[idx].id),
+ type: annotationsData[idx].type,
+ description: annotationsData[idx].description,
+ time: parseFloat(annotationsData[idx].time) * 1000
+ })
+ }
@@ -108,66 +167,7 @@
+ "previousLayers" : previousLayers,
+ "annotations" : annotations
122 firefly/ui_server.py
@@ -4,6 +4,7 @@
+import util
@@ -13,84 +14,83 @@
+ """Serves the basic dashboard page"""
+ def get(self):
+ """Handle the default case of just hitting the index."""
+ embed = False
+ if self.get_argument('embed', '') == 'true':
+ embed = True
+ env = {
+ 'url_path_prefix': self.application.settings['url_path_prefix'],
+ 'data_servers': self.application.settings['data_servers'],
+ 'embedded': embed}
+ self.render("templates/index.html", **env)
+ """Generate tokens"""
+ def get(self):
+ self.set_header("Content-Type", "text/plain")
+ self.write(util.generate_access_token(self.application.settings['secret_key']))
+ """Stores state data and returns an ID for later retrieval"""
+ def post(self):
+ conn = self.application.settings['db_connection']
+ state = unicode(self.request.body, 'utf_8')
+ state_hash = buffer(hashlib.sha1(state.encode('utf_8')).digest())
+ row = conn.execute("select id from states where state_hash=?", (state_hash,)).fetchone()
+ stateid = row[0] if row else conn.execute("insert into states(state, state_hash) values (?, ?)", (state, state_hash)).lastrowid
+ self.set_header("Content-Type", "text/plain")
+ self.write(util.b58encode(stateid))
+ """Retrieves state data given an ID"""
+ def get(self, b58id):
+ conn = self.application.settings['db_connection']
+ try:
+ stateid = util.b58decode(b58id)
+ except:
+ raise tornado.web.HTTPError(404)
+ row = conn.execute("select state from states where id=?", (stateid,)).fetchone()
+ if row:
+ self.set_header("Content-Type", "application/json; charset=UTF-8")
+ self.write(row[0])
+ else:
+ raise tornado.web.HTTPError(404)
+def initialize_ui_server(config, secret_key=None, ioloop=None):
+ if not ioloop:
+ ioloop = tornado.ioloop.IOLoop.instance()
+ # connect to the database
+ conn = sqlite3.connect(config['db_file'], isolation_level=None)
+ conn.execute("create table if not exists states (id integer primary key autoincrement, state text not null, state_hash blob not null)")
+ conn.execute("create index if not exists hash_idx on states(state_hash)")
+ config["static_path"] = os.path.join(os.path.join(*os.path.split(__file__)[:-1]), 'static')
+ config["db_connection"] = conn
+ config["static_url_prefix"] = os.path.join(config["url_path_prefix"], "static") + "/"
+ config["secret_key"] = secret_key
+ # init the application instance
+ application = tornado.web.Application([
+ (r"/", IndexHandler),
+ (r"/token", TokenHandler),
+ (r"/shorten", ShortenHandler),
+ (r"/expand/(.*)", ExpandHandler),
+ (r"/static/(.*)", tornado.web.StaticFileHandler, {"path": config['static_path']}),
+ ], **config)
+ # start the main server
+ http_server = tornado.httpserver.HTTPServer(application, io_loop=ioloop)
+ http_server.listen(config["port"])
+ log.info('Firefly UI server started on port %d' % config["port"])
96 firefly/util.py
@@ -8,17 +8,60 @@
+ """Split a string on the last dot.
+ 'aaa.bbb.ccc' => ('aaa.bbb', 'ccc')
+ 'aaa' => ('', 'aaa')
+ """
+ matches = last_dot_splitter_re.findall(dotted_path)
+ return matches[0][1], matches[0][2]
+ """Import a module + class path like 'a.b.c.d' => d attribute of c module"""
+ module_name, class_name = last_dot_splitter(dotted_path)
+ mod = import_module(module_name)
+ try:
+ attr = getattr(mod, class_name)
+ except AttributeError:
+ raise AttributeError("Module %r has no class %r" % (mod, class_name))
+ return attr
+ """Import a module path like 'a.b.c' => c module"""
+ mod = __import__(dotted_path, globals(), locals(), [])
+ for name in dotted_path.split('.')[1:]:
+ try:
+ mod = getattr(mod, name)
+ except AttributeError:
+ raise AttributeError("Module %r has no attribute %r" % (mod, name))
+ return mod
+ """Sets up logging to stdout for a service script."""
+ log = logging.getLogger(logger_name)
+ log.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ handler.setLevel(logging.INFO)
+ handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
+ log.addHandler(handler)
+ return log
+def verify_access_token(token, key):
+ """Verify that the given access token is still valid. Returns true if it is,
+ false if it either failed to validate or has expired.
+ A token is a combination of a unix timestamp and a signature"""
+ t = token[:15]
+ signature = token[15:]
+ expected_signature = hmac.new(key, msg=t, digestmod=hashlib.sha1).hexdigest()
+ return signature == expected_signature and int(t) >= int(time.time())
+def generate_access_token(key, duration=60):
+ """Generate an access token valid for the given number of seconds"""
+ t = '%015d' % int(time.time() + duration)
+ signature = hmac.new(key, msg=t, digestmod=hashlib.sha1).hexdigest()
+ return t + signature
@@ -30,6 +73,9 @@
+ div, mod = divmod(value, b58base)
+ encoded = b58chars[mod] + encoded
+ value = div
@@ -39,52 +85,6 @@
+ value += b58chars.index(c) * multiplier
+ multiplier *= b58base
36 setup.py
@@ -1,18 +1,18 @@
+from distutils.core import setup
+
+setup(
+    name='firefly',
+    version='1.0',
+    provides=['firefly'],
+    author='Yelp',
+    description='A multi-datacenter graphing tool',
+    packages=['firefly'],
+    long_description="""Firefly provides graphing of performance metrics from multiple data centers and sources.
+    Firefly works with both the Ganglia and Statmonster data sources.
+    """,
+    requires=[
+        "tornado >= 1.1",
+        "pyyaml >= 3.09",
+        "python-rrdtool >= 1.4.7"
+    ]
+)