Commit
Externalise system configs so we no longer need to use localhost as a
hardcoded host in the redis backend. Additionally, the ability to
select between sqlite and redis backends has been committed
mbroome committed Jun 6, 2011
1 parent e5abebb commit 342e9fb
Showing 12 changed files with 75 additions and 21 deletions.
12 changes: 10 additions & 2 deletions INSTALL
@@ -2,14 +2,22 @@ Installing ogslb

Requirements:
python
powerdns

Optional:
redis
python-redis
powerdns

Install:
By default, ogslb lives under /opt/ogslb but it can be placed anywhere. Simply edit 'etc/config.xml' to point to the correct paths.

Redis needs to be running on 'localhost'.
There are two different backend databases ogslb is able to use: redis and sqlite. The redis backend has been tested the longest, but the sqlite backend removes dependencies on additional applications. Right now, selection of the backend is very rudimentary (though this should change as additional backends are tested further). A pair of symlinks selects a particular backend. Under the lib directory, TimeSeries.py and Stats.py need to point to the correct version; the redis case is shown below, and the sqlite equivalent follows it:

ln -s TimeSeries_redis.py TimeSeries.py
ln -s Stats_redis.py Stats.py
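
To use the sqlite backend instead, point the same symlinks at the sqlite implementations (the file names below match the modules shipped in this commit):

ln -s TimeSeries_sqlite.py TimeSeries.py
ln -s Stats_sqlite.py Stats.py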

Additional configuration for backends is managed in 'etc/config.xml' under the <BACKEND> section.
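
For example, the stock etc/config.xml in this commit defines a redis backend like so (the sqlite backend currently uses fixed database paths under /var/tmp, and selection is still done via the symlinks above rather than the type attribute):

   <BACKEND
      type="redis"
      host="localhost"
   />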


PowerDNS needs to be configured to use the ogslb backend. Typically, ogslb is configured to handle requests first in PowerDNS; if it has no matching data, the request falls through to the next available backend, such as BIND-style zone files. Additionally, you will typically want to disable caching of DNS records in PowerDNS to make sure you serve the latest data from ogslb.
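
A minimal pdns.conf sketch of that arrangement might look like the following (the pipe-command path and the bind backend are assumptions for illustration; option names are taken from the PowerDNS documentation, so verify them against your version):

launch=pipe,bind
pipe-command=/opt/ogslb/bin/backend.py
bind-config=/etc/powerdns/bindbackend.conf
cache-ttl=0
query-cache-ttl=0

Listing pipe first makes PowerDNS ask ogslb before the bind backend, and the zeroed cache TTLs keep stale answers from masking fresh ogslb data.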

6 changes: 4 additions & 2 deletions README
@@ -11,7 +11,7 @@ Only need to configure addresses that need healthchecking. If ogslb does not ow
Ability to create primary/secondary configuration with automatic dns failover.

Metrics:
Metrics within oGSLB are collected via custom Python modules that gather data about a destination. All such metrics have a default 10-second test timeout. These modules live in the proto directory and are named after the kind of test they perform. Metrics are stored in a redis database for use by the pdns backend process when filling DNS requests. Currently, the metrics oGSLB understands how to collect include:
Metrics within oGSLB are collected via custom Python modules that gather data about a destination. All such metrics have a default 10-second test timeout. These modules live in the proto directory and are named after the kind of test they perform. Metrics are stored in a database for use by the pdns backend process when filling DNS requests. Currently, the metrics oGSLB understands how to collect include:

DUMMY Allow for an arbitrary value to be forced. No actual remote test is performed.
HTTP HTTP check including content matching of responses as well as simple status-code tests.
@@ -23,8 +23,10 @@ Priorities:
oGSLB associates priorities with each of the collected metrics. Priorities are user-defined in the poller.xml configuration file. Priorities allow oGSLB to rank destinations to help make the best selection possible for a DNS response to a client.

Traffic Balancing:
oGSLB uses the combination of metrics and their associated priorities to rank destinations. Each metric about a destination includes the priority of that metric. When a DNS request is being filled, the last 120 seconds of metrics about the hostname are pulled from redis. For each IP address that could be used for the given name, all of the priority numbers are added up. Once each IP address has its total priority number calculated, a random selection is made from those with the highest total. That selection is the IP address returned to the client as the destination for the requested hostname.
oGSLB uses the combination of metrics and their associated priorities to rank destinations. Each metric about a destination includes the priority of that metric. When a DNS request is being filled, the last 120 seconds of metrics about the hostname are pulled from the database. For each IP address that could be used for the given name, all of the priority numbers are added up. Once each IP address has its total priority number calculated, a random selection is made from those with the highest total. That selection is the IP address returned to the client as the destination for the requested hostname.
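
In rough Python terms, the selection described above amounts to something like this (an illustrative sketch; the names are invented and this is not oGSLB's actual code):

import random
from collections import defaultdict

def pick_address(samples):
    """samples: (ip, priority) metric entries from the last 120 seconds."""
    totals = defaultdict(int)
    for ip, priority in samples:
        totals[ip] += priority          # sum the priorities per candidate address
    if not totals:
        return None                     # no metrics, no answer
    best = max(totals.values())         # highest combined priority wins
    candidates = [ip for ip, t in totals.items() if t == best]
    return random.choice(candidates)    # random pick among the best-ranked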

Backend Databases:
oGSLB can use either redis or sqlite as its database, with additional database support coming in the future.

Mitchell Broome
mitchell.broome@gmail.com
14 changes: 11 additions & 3 deletions bin/backend.py
@@ -25,6 +25,7 @@
sys.path.append(scriptPath + '/..')
sys.path.append(scriptPath + '/../lib')

import ParseConfig
from TimeSeries import *
import time
import logging
@@ -34,13 +35,13 @@
pp = pprint.PrettyPrinter(indent=4)

logger = logging.getLogger("ogslb")
db = TimeSeries()
#db = TimeSeries()

# backend.py is the python backend that is called by PowerDNS and talks to redis.
# We listen to stdin and talk on stdout which effectively attaches us to PowerDNS.

# Here, we actually talk to redis and do our prioritizing of healthy services
def DNSLookup(query):
def DNSLookup(db, query):
"""parse DNS query and produce lookup result."""

(_type, qname, qclass, qtype, _id, ip) = query
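
For reference, a PowerDNS pipe-backend query arrives on stdin as one tab-separated line that unpacks into exactly those six fields; a hypothetical example (fields separated by tabs):

Q	www.example.com	IN	A	-1	203.0.113.7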
@@ -120,6 +121,12 @@ def main():
    logFile = '/tmp/backend-%d.log' % pid
    debug = 1

    # define some defaults
    configFile = scriptPath + '/../etc/config.xml'

    # load up the configs
    Config = ParseConfig.parseConfig(configFile);

    # setup the logger
    if(debug):
        logger.setLevel(logging.DEBUG)
@@ -141,6 +148,7 @@ def main():
    logger.info('startup')

    first_time = True
    db = TimeSeries(Config)

    # here, we have to deal with how to talk to PowerDNS.
    while 1: # loop forever reading from PowerDNS
@@ -173,7 +181,7 @@
        logger.debug('Performing DNSLookup(%s)' % repr(query))
        lookup = ''
        # Here, we actually do the real work. Look up a hostname in redis and prioritize it
        lookup = DNSLookup(query)
        lookup = DNSLookup(db, query)
        if lookup != '':
            logger.debug(lookup)
            fprint(lookup)
14 changes: 10 additions & 4 deletions bin/cli.py
@@ -28,16 +28,17 @@
import getopt

from time import *
import ParseConfig
from TimeSeries import *
from Stats import *
import pprint
import random

pp = pprint.PrettyPrinter(indent=4)

def getData(fields):
    t = TimeSeries()
    s = Stats()
def getData(Config, fields):
    t = TimeSeries(Config)
    s = Stats(Config)

    r = s.sget("stats.hostlist")

@@ -84,6 +85,8 @@ def usage():


def main(argv):
    # define some defaults
    configFile = scriptPath + '/../etc/config.xml'

    # parse the command line arguments
    try:
@@ -103,8 +106,11 @@
        elif opt == '-n':
            name = arg

    # load up the configs
    Config = ParseConfig.parseConfig(configFile);

    fields = field.split(',');
    getData(Config, fields)
getData(Config, fields)


if __name__ == "__main__":
3 changes: 2 additions & 1 deletion bin/ogslbd
@@ -90,6 +90,7 @@ ogslbd [-c=/path/to/config.xml] [-l=/path/to/logfile] [-d] [-?]

# the main program
def main(argv):
    global maxResponder
    signal.signal(signal.SIGINT, signal_handler)

    # define some defaults
@@ -155,7 +156,7 @@ def main(argv):

    # for each of the hostnames we are monitoring, we stick the name in redis
    # that way, we can associate which timeseries keys we are monitoring
    stats = Stats();
    stats = Stats(Config);
    stats.sexpire("stats.hostlist")
    for v in vips:
        stats.sput("stats.hostlist", v)
6 changes: 5 additions & 1 deletion etc/config.xml
@@ -2,11 +2,15 @@
<OGSLB>

<CONFIG
redishost="localhost"
logfile="/var/log/ogslb.log"
pollerxml="/opt/ogslb/etc/poller.xml"
protodir="/opt/ogslb/proto"
/>

<BACKEND
type="redis"
host="localhost"
/>

</OGSLB>

10 changes: 10 additions & 0 deletions lib/ParseConfig.py
@@ -36,12 +36,22 @@ def getText(nodelist):
def parseConfig(filename='config.xml'):
    dom = xml.dom.minidom.parse(filename);
    config = {}
    backend = {}
    try:
        dbc = dom.getElementsByTagName('CONFIG')[0]
        for a in dbc.attributes.keys():
            config[a] = dbc.attributes[a].value
    except:
        logger.debug("error getting config")

    try:
        dbc = dom.getElementsByTagName('BACKEND')[0]
        for a in dbc.attributes.keys():
            backend[a] = dbc.attributes[a].value
        config['backend'] = backend
    except:
        logger.debug("error getting backend config")


    return(config)
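
Given the etc/config.xml shown in this commit, the returned dict would look roughly like this (an illustrative sketch, not captured output):

{
    'logfile': '/var/log/ogslb.log',
    'pollerxml': '/opt/ogslb/etc/poller.xml',
    'protodir': '/opt/ogslb/proto',
    'backend': {'type': 'redis', 'host': 'localhost'},
}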

6 changes: 4 additions & 2 deletions lib/Responder.py
@@ -33,12 +33,14 @@
class Responder(threading.Thread):
    def __init__(self, queue, Config, threadID):
        self.__queue = queue
        self.Config = Config
        self._config = Config
        self.threadName = "responder-" + str(threadID)
        threading.Thread.__init__(self, name=self.threadName)

        logger.debug("responder started: %s" % self.threadName)

        # setup the time series connection
        self._db = TimeSeries()
        self._db = TimeSeries(self._config)


    def run(self):
3 changes: 2 additions & 1 deletion lib/Stats_redis.py
@@ -30,7 +30,8 @@
# TimeSeries which is collected from responses to tests. Stats is used
# for things such as collecting info about how ogslb itself is doing
class Stats:
    def __init__(self):
    def __init__(self, Config):
        self._config = Config
        self._db = redis.Redis('localhost')

    def sput(self, key, value):
3 changes: 2 additions & 1 deletion lib/Stats_sqlite.py
@@ -33,7 +33,8 @@
# TimeSeries which is collected from responses to tests. Stats is used
# for things such as collecting info about how ogslb itself is doing
class Stats:
    def __init__(self):
    def __init__(self, Config):
        self._config = Config
        self._db = sqlite3.connect('/var/tmp/ogslb-stats.db')
        try:
            self._setupDB()
15 changes: 13 additions & 2 deletions lib/TimeSeries_redis.py
@@ -20,11 +20,15 @@
import ast
from time import time
import pprint
import logging


BackendType = "redis"

# setup pprint for debugging
pp = pprint.PrettyPrinter(indent=4)
# and setup the logger
logger = logging.getLogger("ogslb")


# the TimeSeries class is used to abstract how we deal with data
@@ -35,8 +39,15 @@
# window of time
class TimeSeries:

    def __init__(self):
        self._db = redis.Redis('localhost')
    def __init__(self, Config):
        host = ''
        self._config = Config
        try:
            host = self._config['backend']['host']
        except:
            host = 'localhost'
        logger.debug("host: %s" % host)
        self._db = redis.Redis(host)

    def zput(self, key, value, when):
        self._db.zadd(key, value, when)
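
A side note on the data model (illustrative; the read path is outside this hunk): because each sample is scored with its timestamp in a redis sorted set, a recent window can be pulled with a standard range-by-score query, e.g.

self._db.zrangebyscore(key, time() - 120, time())

where the 120-second window matches the behaviour described in the README.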
4 changes: 2 additions & 2 deletions lib/TimeSeries_sqlite.py
@@ -40,8 +40,8 @@
# window of time
class TimeSeries:

    def __init__(self):
        logger.debug("responder startup")
    def __init__(self, Config):
        self._config = Config
        self._db = sqlite3.connect('/var/tmp/ogslb-timeseries.db', check_same_thread = False)
        try:
            self._setupDB()
