Externalise system configs so we no longer need to use localhost as

a hardcoded host in the redis backend. Additionally, the ability to
select between sqlite and redis backends has been committed.
1 parent e5abebb commit 342e9fbf1d7b40c250db2aa32b0d572f5a79e099 @mbroome committed Jun 5, 2011
Showing with 75 additions and 21 deletions.
  1. +10 −2 INSTALL
  2. +4 −2 README
  3. +11 −3 bin/backend.py
  4. +10 −4 bin/cli.py
  5. +2 −1 bin/ogslbd
  6. +5 −1 etc/config.xml
  7. +10 −0 lib/ParseConfig.py
  8. +4 −2 lib/Responder.py
  9. +2 −1 lib/Stats_redis.py
  10. +2 −1 lib/Stats_sqlite.py
  11. +13 −2 lib/TimeSeries_redis.py
  12. +2 −2 lib/TimeSeries_sqlite.py
12 INSTALL
@@ -2,14 +2,22 @@ Installing ogslb
Requirements:
python
+powerdns
+
+Optional:
redis
python-redis
-powerdns
Install:
By default, ogslb lives under /opt/ogslb but it can be placed anywhere. Simply edit 'etc/config.xml' to point to the correct paths.
-Redis needs to be running on 'localhost'.
+There are two backend databases ogslb is able to use: redis and sqlite. The redis backend has been tested the longest, but the sqlite backend removes dependencies on additional applications. Right now, the selection of the backend is very rudimentary (though this should change as additional backends are tested more thoroughly). A pair of symlinks is needed to select a particular backend. Under the lib directory, TimeSeries.py and Stats.py need to point to the correct version:
+
+ln -s TimeSeries_redis.py TimeSeries.py
+ln -s Stats_redis.py Stats.py
+
+Additional configuration for backends is managed in 'etc/config.xml' under the <BACKEND> section.
+
Powerdns needs to be configured to use the ogslb backend. Typically, ogslb will be configured to handle requests first in powerdns; if it doesn't have any matching data, the request will fall through to the next available backend, such as bind-style zone files. Additionally, you will typically want to disable caching of dns records in powerdns to make sure you have the latest version of the data from ogslb.
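The symlinks shown above select the redis backend; for the sqlite backend, the equivalent selection would presumably be the following, assuming the same symlink convention and the _sqlite modules added in this commit:

ln -s TimeSeries_sqlite.py TimeSeries.py
ln -s Stats_sqlite.py Stats.py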
6 README
@@ -11,7 +11,7 @@ Only need to configure addresses that need healthchecking. If ogslb does not ow
Ability to create primary/secondary configuration with automatic dns failover.
Metrics:
-Metrics within oGSLB are collected via custom python modules used to collect data about a destination. All such metrics have a default 10sec timeout on the test. These modules live in the proto directory and are named after the kind of test they perform. Metrics are stored in a redis database for use by the pdns backend process when filling dns requests. Currently, the metrics oGSLB understands how to collect include:
+Metrics within oGSLB are collected via custom python modules used to collect data about a destination. All such metrics have a default 10sec timeout on the test. These modules live in the proto directory and are named after the kind of test they perform. Metrics are stored in a database for use by the pdns backend process when filling dns requests. Currently, the metrics oGSLB understands how to collect include:
DUMMY Allow for an arbitrary value to be forced. No actual remote test is performed.
HTTP Http check including content matching of responses as well as simple status code tests.
@@ -23,8 +23,10 @@ Priorities:
oGSLB associates priorities with each of the collected metrics. Priorities are user defined in the poller.xml configuration file. Priorities allow for oGSLB to rank destinations to help make the best selection possible for a dns response to a client.
Traffic Balancing:
-oGSLB uses the combination of metrics and thier associated priorities to rank destinations. Each metric about a destination includes the priority of that metric. When a dns request is being filled, the last 120seconds of metrics about the hostname is pulled from redis. For each ip address that could be used for the given name, all of the priority numbers are added up. Once each ip address has it's total priority number calculated, a random selection is made from the highest priority number. That selection is the ip address returned to the client as the destination for the requested hostname.
+oGSLB uses the combination of metrics and their associated priorities to rank destinations. Each metric about a destination includes the priority of that metric. When a dns request is being filled, the last 120 seconds of metrics about the hostname are pulled from the database. For each ip address that could be used for the given name, all of the priority numbers are added up. Once each ip address has its total priority number calculated, a random selection is made from among the addresses with the highest priority number. That selection is the ip address returned to the client as the destination for the requested hostname.
+Backend Databases:
+oGSLB can use either redis or sqlite as its backend database, with support for additional databases coming in the future.
Mitchell Broome
mitchell.broome@gmail.com
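As a rough illustration of the traffic-balancing logic described above: sum the priorities reported for each address over the metrics window, then pick at random among the addresses sharing the highest total. This is a sketch only; the function and data layout are invented for illustration and are not the oGSLB code:

import random
from collections import defaultdict

def pick_address(samples):
    # samples: (ip, priority) pairs gathered over the last 120 seconds
    totals = defaultdict(int)
    for ip, priority in samples:
        totals[ip] += priority
    best = max(totals.values())
    winners = [ip for ip, total in totals.items() if total == best]
    return random.choice(winners)

# pick_address([('10.0.0.1', 10), ('10.0.0.2', 10), ('10.0.0.1', 5)]) -> '10.0.0.1'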
14 bin/backend.py
@@ -25,6 +25,7 @@
sys.path.append(scriptPath + '/..')
sys.path.append(scriptPath + '/../lib')
+import ParseConfig
from TimeSeries import *
import time
import logging
@@ -34,13 +35,13 @@
pp = pprint.PrettyPrinter(indent=4)
logger = logging.getLogger("ogslb")
-db = TimeSeries()
+#db = TimeSeries()
# backend.py is the python backend that is called by PowerDNS and talks to redis.
# We listen to stdin and talk on stdout, which effectively attaches us to PowerDNS.
# Here, we actually talk to redis and do our prioritizing of healthy services
-def DNSLookup(query):
+def DNSLookup(db, query):
"""parse DNS query and produce lookup result."""
(_type, qname, qclass, qtype, _id, ip) = query
@@ -120,6 +121,12 @@ def main():
logFile = '/tmp/backend-%d.log' % pid
debug = 1
+ # define some defaults
+ configFile = scriptPath + '/../etc/config.xml'
+
+ # load up the configs
+ Config = ParseConfig.parseConfig(configFile);
+
# setup the logger
if(debug):
logger.setLevel(logging.DEBUG)
@@ -141,6 +148,7 @@ def main():
logger.info('startup')
first_time = True
+ db = TimeSeries(Config)
# here, we have to deal with how to talk to PowerDNS.
while 1: # loop forever reading from PowerDNS
@@ -173,7 +181,7 @@ def main():
logger.debug('Performing DNSLookup(%s)' % repr(query))
lookup = ''
# Here, we actually do the real work. Look up a hostname in redis and prioritize it
- lookup = DNSLookup(query)
+ lookup = DNSLookup(db, query)
if lookup != '':
logger.debug(lookup)
fprint(lookup)
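A minimal sketch of how the refactored pieces fit together, as they would run inside backend.py's main loop: load the config, build the backend handle once, and pass it into DNSLookup for each query. The query tuple layout matches the unpack at the top of DNSLookup; the path and field values below are illustrative, not taken from a real run:

import ParseConfig
from TimeSeries import *

Config = ParseConfig.parseConfig('/opt/ogslb/etc/config.xml')       # illustrative path
db = TimeSeries(Config)                                             # backend selected via the lib symlinks
query = ('Q', 'www.example.com', 'IN', 'A', '-1', '203.0.113.10')   # (_type, qname, qclass, qtype, _id, ip)
lookup = DNSLookup(db, query)   # returns the lookup result, or '' when there is no matching data
if lookup != '':
    print(lookup)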
14 bin/cli.py
@@ -28,16 +28,17 @@
import getopt
from time import *
+import ParseConfig
from TimeSeries import *
from Stats import *
import pprint
import random
pp = pprint.PrettyPrinter(indent=4)
-def getData(fields):
- t = TimeSeries()
- s = Stats()
+def getData(Config, fields):
+ t = TimeSeries(Config)
+ s = Stats(Config)
r = s.sget("stats.hostlist")
@@ -84,6 +85,8 @@ def usage():
def main(argv):
+ # define some defaults
+ configFile = scriptPath + '/../etc/config.xml'
# parse the command line arguments
try:
@@ -103,8 +106,11 @@ def main(argv):
elif opt == '-n':
name = arg
+ # load up the configs
+ Config = ParseConfig.parseConfig(configFile);
+
fields = field.split(',');
- getData(fields)
+ getData(Config, fields)
if __name__ == "__main__":
3 bin/ogslbd
@@ -90,6 +90,7 @@ ogslbd [-c=/path/to/config.xml] [-l=/path/to/logfile] [-d] [-?]
# the main program
def main(argv):
+ global maxResponder
signal.signal(signal.SIGINT, signal_handler)
# define some defaults
@@ -155,7 +156,7 @@ def main(argv):
# for each of the hostnames we are monitoring, we stick the name in redis
# that way, we can associate which timeseries keys we are monitoring
- stats = Stats();
+ stats = Stats(Config);
stats.sexpire("stats.hostlist")
for v in vips:
stats.sput("stats.hostlist", v)
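The host list registered here is what cli.py later reads back via s.sget("stats.hostlist"). A small sketch of that round trip, with the config path and vip name invented for illustration:

import ParseConfig
from Stats import *

Config = ParseConfig.parseConfig('/opt/ogslb/etc/config.xml')   # illustrative path
stats = Stats(Config)
stats.sexpire("stats.hostlist")                   # drop any stale list first
stats.sput("stats.hostlist", "www.example.com")   # ogslbd registers each monitored name
hosts = stats.sget("stats.hostlist")              # cli.py starts from this list when reporting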
6 etc/config.xml
@@ -2,11 +2,15 @@
<OGSLB>
<CONFIG
- redishost="localhost"
logfile="/var/log/ogslb.log"
pollerxml="/opt/ogslb/etc/poller.xml"
protodir="/opt/ogslb/proto"
/>
+ <BACKEND
+ type="redis"
+ host="localhost"
+ />
+
</OGSLB>
10 lib/ParseConfig.py
@@ -36,12 +36,22 @@ def getText(nodelist):
def parseConfig(filename='config.xml'):
dom = xml.dom.minidom.parse(filename);
config = {}
+ backend = {}
try:
dbc = dom.getElementsByTagName('CONFIG')[0]
for a in dbc.attributes.keys():
config[a] = dbc.attributes[a].value
except:
logger.debug("error getting config")
+ try:
+ dbc = dom.getElementsByTagName('BACKEND')[0]
+ for a in dbc.attributes.keys():
+ backend[a] = dbc.attributes[a].value
+ config['backend'] = backend
+ except:
+ logger.debug("error getting config")
+
+
return(config)
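Given the etc/config.xml in this commit, the dict returned by parseConfig() would look roughly like the following. This is only to show where the ['backend']['host'] value used by the backends comes from; the exact values depend on the local config:

Config = {
    'logfile':   '/var/log/ogslb.log',
    'pollerxml': '/opt/ogslb/etc/poller.xml',
    'protodir':  '/opt/ogslb/proto',
    'backend':   {'type': 'redis', 'host': 'localhost'},
}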
6 lib/Responder.py
@@ -33,12 +33,14 @@
class Responder(threading.Thread):
def __init__(self, queue, Config, threadID):
self.__queue = queue
- self.Config = Config
+ self._config = Config
self.threadName = "responder-" + str(threadID)
threading.Thread.__init__(self, name=self.threadName)
+ logger.debug("responder started: %s" % self.threadName)
+
# setup the time series connection
- self._db = TimeSeries()
+ self._db = TimeSeries(self._config)
def run(self):
3 lib/Stats_redis.py
@@ -30,7 +30,8 @@
# TimeSeries which is collected from responses to tests. Stats is used
# for things such as collecting info about how ogslb itself is doing
class Stats:
- def __init__(self):
+ def __init__(self, Config):
+ self._config = Config
self._db = redis.Redis('localhost')
def sput(self, key, value):
3 lib/Stats_sqlite.py
@@ -33,7 +33,8 @@
# TimeSeries which is collected from responses to tests. Stats is used
# for things such as collecting info about how ogslb itself is doing
class Stats:
- def __init__(self):
+ def __init__(self, Config):
+ self._config = Config
self._db = sqlite3.connect('/var/tmp/ogslb-stats.db')
try:
self._setupDB()
15 lib/TimeSeries_redis.py
@@ -20,11 +20,15 @@
import ast
from time import time
import pprint
+import logging
+
BackendType = "redis"
# setup pprint for debugging
pp = pprint.PrettyPrinter(indent=4)
+# and setup the logger
+logger = logging.getLogger("ogslb")
# the TimeSeries class is used to abstract how we deal with data
@@ -35,8 +39,15 @@
# window of time
class TimeSeries:
- def __init__(self):
- self._db = redis.Redis('localhost')
+ def __init__(self, Config):
+ host = ''
+ self._config = Config
+ try:
+ host = self._config['backend']['host']
+ except:
+ host = 'localhost'
+ logger.debug("host: %s" % host)
+ self._db = redis.Redis(host)
def zput(self, key, value, when):
self._db.zadd(key, value, when)
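A quick sketch of the fallback behaviour added above; the hostname is illustrative:

db = TimeSeries({'backend': {'host': 'redis.example.com'}})   # connects to the configured host
db = TimeSeries({})                                           # no BACKEND config: falls back to 'localhost'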
4 lib/TimeSeries_sqlite.py
@@ -40,8 +40,8 @@
# window of time
class TimeSeries:
- def __init__(self):
- logger.debug("responder startup")
+ def __init__(self, Config):
+ self._config = Config
self._db = sqlite3.connect('/var/tmp/ogslb-timeseries.db', check_same_thread = False)
try:
self._setupDB()
