Daedalus

Daedalus is a Django application to store log messages on Cassandra.

There's a basic wiki at GitHub. This project is at an alpha-quality stage and is not recommended for use on production systems. It's developed on Ubuntu 12.04, and tested on CentOS 6 and Ubuntu 12.04 LTS Server virtual machines (with the help of fabric). Please report any issues here.

To install and run the Daedalus server using pip (assuming a virtualenv directory exists and Cassandra is running on localhost), use:

$ ./virtualenv/bin/pip install daedalus
$ export DJANGO_SETTINGS_MODULE=hgdeoro.daedalus.settings
$ ./virtualenv/bin/django-admin.py runserver

To install only the Python client (and logging handler), run:

$ pip install daedalus-python-client

Implemented functional use cases

  1. Backend: Receive log messages using HTTP

  2. Frontend: Show messages

    • Filter by application
    • Filter by host
    • Filter by severity
    • Show all messages (default for home page)
    • Simplest form of pagination
    • Show a line chart of received message counts
  3. Client: Python client to send messages using HTTP

  4. Client: logging handler that integrates with Python's logging framework

  5. Client: Java client and log4j appender to send messages using HTTP.
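
The logging-handler integration (item 4) can be pictured with a minimal sketch. The real handler ships with daedalus-python-client and its actual class name and constructor are not shown in this README; the stand-in below (`DaedalusHandlerSketch`, a hypothetical name) only formats each record into the message structure described in the Glossary and collects the results in a list instead of POSTing them over HTTP:

```python
import logging
import socket
import time

class DaedalusHandlerSketch(logging.Handler):
    """Hypothetical stand-in for the real Daedalus logging handler.
    A real handler would send each message dict to the backend over
    HTTP; here we just collect them in `self.sent` for illustration."""

    def __init__(self, application, host=None):
        logging.Handler.__init__(self)
        self.application = application
        self.host = host or socket.gethostname()
        self.sent = []  # a real handler would HTTP-POST instead

    def emit(self, record):
        # Build the message structure listed in the Glossary
        self.sent.append({
            "application": self.application,
            "host": self.host,
            "severity": record.levelname,
            "timestamp": repr(time.time()),  # exact encoding is an assumption
            "message": self.format(record),
        })

logger = logging.getLogger("daedalus-demo")
logger.setLevel(logging.INFO)
handler = DaedalusHandlerSketch(application="intranet")
logger.addHandler(handler)
logger.error("Something failed")
```

Once attached, every call through the standard logging API produces one Daedalus-style message, which is the whole point of the handler: existing applications need no code changes beyond handler setup.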

For the absolute newbie: how to install Cassandra + Daedalus on Ubuntu

See dev-scripts/install-on-ubuntu.sh.

I recommend downloading and running the script in a newly created virtual machine (mainly because it installs many packages and must be run as root). The virtual machine should have at least 1 GB of RAM (Cassandra may not work with less memory). The script installs the JDK and Cassandra, clones the Daedalus repository, and launches the Django development server.

You can download this script and run it as root, or use it as a guide, copying and pasting each of the commands from the script into a console or ssh session.

For developers: how to download and hack

JDK: download and install JDK 6 (needed for Cassandra).

Cassandra: download, install and start Cassandra.

Download from GitHub

$ git clone http://github.com/hgdeoro/daedalus
$ cd daedalus

Create a virtualenv and install requirements

$ virtualenv virtualenv
$ ./virtualenv/bin/pip install -r requirements.txt
$ ./virtualenv/bin/pip install -r requirements-dev.txt

Run syncdb and syncdb_cassandra

$ ./dev-scripts/manage.sh syncdb
$ ./dev-scripts/manage.sh syncdb_cassandra

Start memcached

$ sudo service memcached start

Start development server

$ ./dev-scripts/runserver.sh

By now you'll have the Django development server running. Both the Daedalus backend (the Django app that receives logs via HTTP) and the frontend (the application used to view the logs) are started. To use it, go to http://127.0.0.1:8084/.

To create some random log messages, you can run:

$ ./dev-scripts/bulk_save_random_messages.sh

(and press Ctrl+C twice to stop it).

The project can be imported into Eclipse using PyDev.

Current iteration goals

Not implemented right now / Ideas / TODOs

See TODO.md.

General architecture

  • Client + server app.

  • Client:

    • Thin layer over a http client to send the messages
  • Server:

    • Backend: Django app that receives the messages.
    • Frontend: Django app for viewing the messages.
  • Messages sent over HTTP
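
The "thin layer over an http client" can be sketched with nothing but the standard library. The endpoint path (`/save/`) and the form-encoded body are assumptions for illustration, not the client's documented API; the request is built but never opened, so no server is needed:

```python
import time
import urllib.parse
import urllib.request

def build_save_request(base_url, application, host, severity, message):
    """Build (but don't send) an HTTP POST carrying one log message.
    The '/save/' path and the field names are assumptions."""
    data = urllib.parse.urlencode({
        "application": application,
        "host": host,
        "severity": severity,
        "timestamp": repr(time.time()),
        "message": message,
    }).encode("ascii")
    # Passing `data` makes urllib issue a POST instead of a GET
    return urllib.request.Request(base_url + "/save/", data=data)

req = build_save_request("http://127.0.0.1:8084", "intranet",
                         "web01", "ERROR", "disk full")
# A real client would now call urllib.request.urlopen(req)
```

Keeping the client this thin means any language with an HTTP library can act as a Daedalus client, which is how the Java/log4j client fits the same architecture.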

Cassandra

  • A single keyspace holds all the column families.

  • Messages are stored using 4 column families:

    • CF: Logs - Cols[]: { uuid1/timestamp: JSON-encoded message }
    • CF: Logs_by_app - Cols[]: { uuid1/timestamp: JSON-encoded message }
    • CF: Logs_by_host - Cols[]: { uuid1/timestamp: JSON-encoded message }
    • CF: Logs_by_severity - Cols[]: { uuid1/timestamp: JSON-encoded message }
  • Alternative format (implemented by StorageService2):

    • CF: Logs - Cols[]: { uuid1/timestamp: JSON-encoded message }
    • CF: Logs_by_app - Cols[]: { uuid1/timestamp: '' }
    • CF: Logs_by_host - Cols[]: { uuid1/timestamp: '' }
    • CF: Logs_by_severity - Cols[]: { uuid1/timestamp: '' }
  • No SuperColumns nor secondary indexes for now.
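
The difference between the two layouts can be sketched with plain dicts, no Cassandra connection needed: the original format duplicates the JSON-encoded message into every index column family, while StorageService2 stores the JSON once in Logs and keeps empty-valued columns in the index families (the row keys, omitted here, would carry the app/host/severity values):

```python
import json
import uuid

msg = {"application": "intranet", "host": "web01",
       "severity": "ERROR", "message": "disk full"}
col_name = uuid.uuid1()        # time-based UUID used as the column name
encoded = json.dumps(msg)

# Original format: the JSON payload is duplicated into every CF
original = {
    "Logs": {col_name: encoded},
    "Logs_by_app": {col_name: encoded},
    "Logs_by_host": {col_name: encoded},
    "Logs_by_severity": {col_name: encoded},
}

# StorageService2: JSON stored once; index CFs hold empty values
alternative = {
    "Logs": {col_name: encoded},
    "Logs_by_app": {col_name: ""},
    "Logs_by_host": {col_name: ""},
    "Logs_by_severity": {col_name: ""},
}
```

The trade-off: the original format answers a filtered query from a single CF read, while the alternative saves storage at the cost of a second lookup into Logs to fetch the message bodies.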

Glossary

  • Log message: structure containing:
    • message
    • application
    • host
    • severity
    • timestamp (Cassandra)

Changelog

v0.0.7

  • Created a command-line tool to send log events
  • Created a handler for the Python logging framework
  • Fixed various issues around setup.py and created a setup.py for the Python client.
  • Now the Daedalus server and Python client are uploaded to PyPI

v0.0.6

  • Many enhancements to the fabric scripts and new tasks: daedalus_syncdb(), gunicorn_launch()
  • Created dev-scripts/install-on-ubuntu.sh to document and automate installation on Ubuntu
  • Updated the scripts in dev-scripts/ to automatically use the virtualenv if it exists

License

#-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#    daedalus - Centralized log server
#    Copyright (C) 2012 - Horacio Guillermo de Oro <hgdeoro@gmail.com>
#
#    This file is part of daedalus.
#
#    daedalus is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation version 2.
#
#    daedalus is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License version 2 for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with daedalus; see the file LICENSE.txt.
#-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-