Logging Improvements (v3.3.0)

* Finally figured out the right way to display errors nicely (`sys.excepthook`)
  and added it. It can be turned off via the `tracebacks` argument to the
  `BackEnd()` class.
* Improved the look of the docs (README.md, docs/*.md)
commit 78aa3c98359d2dbf04143d091cb772807b610e19 1 parent 6a91dcd
Daniel Foerster authored
28 README.md
@@ -15,18 +15,18 @@ The basic same-process task module, the logging module, and the multi-purpose
resynchronization module fall into this category, as well as the threaded
module.
-simple.py is just that -- a simple task creator. A task can be run in three
+*simple.py* is just that -- a simple task creator. A task can be run in three
ways: waiting, callbacks, or ignored.
-log.py provides the refreshingly simple logging mechanism for TaskIt, with
+*log.py* provides the refreshingly simple logging mechanism for TaskIt, with
splitters, file-like interfaces, and an interface to file-like objects.
-resync.py provides a novel way to get the best of both the synchronous and
+*resync.py* provides a novel way to get the best of both the synchronous and
asynchronous worlds, with a simple yet powerful API allowing things such as a
basic producer-consumer model, handing off the results of a callback to another
function, and more.
-threaded.py centralizes the imports (currently three) from thread/_thread.
+*threaded.py* centralizes the imports (currently three) from thread/_thread.
Distributed modules
-------------------
@@ -37,14 +37,14 @@ without obfuscating the transport mechanism. By default, the transport
mechanism uses standard JSON, but a pickle codec is also available, and writing
custom codecs is quite simple.
-common.py provides common constants, functions, and classes.
+*common.py* provides common constants, functions, and classes.
-backend.py is the backend of the distributed task processing model. It provides
-DTPM server writers with the ability to use almost any function without
-modification, and gives allowances for special cases.
+*backend.py* is the backend of the distributed task processing model. It
+provides DTPM server writers with the ability to use almost any function
+without modification, and gives allowances for special cases.
-frontend.py is the frontend to the DTPM. The API is similar to that of
-simple.py, with the allowances of routing all calls through a FrontEnd and
+*frontend.py* is the frontend to the DTPM. The API is similar to that of
+*simple.py*, with the allowances of routing all calls through a *FrontEnd* and
using string identifiers. It provides its own job count as well as access to
the backend's job count. It should be noted that a discrepancy between these
numbers is generally not a bad sign; client load is not necessarily the same as
@@ -67,12 +67,12 @@ Examples
--------
The examples directory contains example scripts for each feature of TaskIt.
-* worker.py is an example for taskit.backend and must be running before main.py
+* *worker.py* is an example for taskit.backend and must be running when main.py
is run.
-* main.py is an example for taskit.frontend and requires worker.py to be run
+* *main.py* is an example for taskit.frontend and requires worker.py to be run
first.
-* resync.py demos every feature of taskit.resync.
-* simple.py displays many of the taskit.simple features.
+* *resync.py* demos every feature of taskit.resync.
+* *simple.py* displays many of the taskit.simple features.
Documentation
-------------
14 docs/protocol.md
@@ -13,7 +13,7 @@ Overview
The TaskIt DTPM interface is a symmetrical sandwich of four parts:
-TaskIt --> JSON --> FirstByte --> Sockets --> FirstByte --> JSON --> TaskIt.
+**TaskIt --> JSON --> FirstByte --> Sockets --> FirstByte --> JSON --> TaskIt**
At each point in the sandwich above, information is decomposed by the
communicator until it reaches the socket, where it is transported to the
@@ -25,7 +25,7 @@ Decomposition: TaskIt
Regular results are retrieved and become ['success', result] pairs. Errors are
caught and become ['error', type, args] groups.
-Example: return 50 --> ['success', 50]
+*Example: return 50 --> ['success', 50]*
Decomposition: JSON
-------------------
@@ -34,7 +34,7 @@ Already at this point, no knowledge of what is happening exists. All that is
known is that an object must be translated into a string, which is exactly what
JSON does.
-Example: ['success', 50] --> '["success", 50]'
+*Example: ['success', 50] --> '["success", 50]'*
Decomposition: FirstByte
------------------------
@@ -44,7 +44,7 @@ a preamble to announce the chunk size, and breaks the message up into articles,
each prefixed with a 1 if another article follows, or a 0 if this article is
the last. These articles are at this point chunk size or shorter.
-Example: '["success", 50]' --> send('2048'), send('0["success", 50]')
+*Example: '["success", 50]' --> send('2048'), send('0["success", 50]')*
Middleman: Sockets
------------------
@@ -60,14 +60,14 @@ Now on the communicatee side, FirstByte gets the preamble to discover the
chunk size, and reads the articles as they come in, looking for the 0 prefix.
It then has the complete message recomposed.
-Example: recv('2048'), recv('0["success", 50]') --> '["success", 50]'
+*Example: recv('2048'), recv('0["success", 50]') --> '["success", 50]'*
Recomposition: JSON
-------------------
JSON now takes the string and retranslates it into an object.
-Example: '["success", 50]' --> ['success', 50]
+*Example: '["success", 50]' --> ['success', 50]*
Recomposition: TaskIt
---------------------
@@ -77,4 +77,4 @@ If the object is an error, TaskIt raises a single error, with the sent type and
args information. If the object is a success, TaskIt returns the result
included.
-Example: ['success', 50] --> return 50
+*Example: ['success', 50] --> return 50*
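The full decompose/recompose round trip described in protocol.md can be sketched in a few lines. This is an illustrative reimplementation, not TaskIt's actual FirstByte code; `fb_encode`, `fb_decode`, and the fixed chunk size are assumptions for the demo:

```python
import json

CHUNK = 2048  # chunk size announced in the preamble

def fb_encode(obj):
    """Decompose: result object -> JSON -> FirstByte articles."""
    msg = json.dumps(obj)
    sends = [str(CHUNK)]  # preamble announcing the chunk size
    # Break the message into articles of at most CHUNK-1 payload chars,
    # prefixed with 1 if another article follows, or 0 for the last one.
    parts = [msg[i:i + CHUNK - 1] for i in range(0, len(msg), CHUNK - 1)] or ['']
    for i, part in enumerate(parts):
        prefix = '0' if i == len(parts) - 1 else '1'
        sends.append(prefix + part)
    return sends

def fb_decode(sends):
    """Recompose: FirstByte articles -> JSON -> result object."""
    msg = ''
    for article in sends[1:]:  # skip the preamble
        msg += article[1:]
        if article[0] == '0':  # last article reached
            break
    return json.loads(msg)

sends = fb_encode(['success', 50])
# For a short message this matches the protocol.md example:
# ['2048', '0["success", 50]']
assert fb_decode(sends) == ['success', 50]
```

Longer messages simply produce more `1`-prefixed articles before the final `0`-prefixed one; the receiver never needs a length header beyond the preamble.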
3  examples/simple.py
@@ -43,6 +43,9 @@ def main():
log(INFO, 'Ignoring error_time()')
error_time.ignore()
+ log(INFO, 'Using default error handling with error_time()')
+ error_time.callback(cb, None)
+
log(INFO, 'Using callback with instant_time()')
instant_time.callback(cb, error_cb)
2  taskit/__init__.py
@@ -23,4 +23,4 @@
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
-__version__ = '3.2.4'
+__version__ = '3.3.0'
34 taskit/backend.py
@@ -84,20 +84,23 @@ class BackEnd(FirstByteProtocol):
"""
def __init__(self, tasks, host='127.0.0.1', port=DEFAULT_PORT,
- logger=null_logger, codec=JSONCodec, end_resp=END_RESP):
- """
- tasks -- a dict consisting of task:callable or task:(callable,bool)
- items. The boolean, which defaults to False, determines
- whether or not the BackEnd() instance will be passed as the
- first argument to the callable. Useful for tasks needing
- backend.subtask().
- host -- The host to bind to.
- port -- The port to bind to.
- logger -- A logger supporting the taskit.log interface.
- codec -- A codec to be used in converting messages into strings.
- end_resp -- The time (in seconds) that stop_server() and main() should
- use to determine the responsiveness: it is used for socket
- timeouts and while:sleep() wait loops.
+ logger=null_logger, codec=JSONCodec, end_resp=END_RESP,
+ tracebacks=True):
+ """
+ tasks -- a dict consisting of task:callable or
+ task:(callable, bool) items. The boolean, which defaults
+ to False, determines whether or not the BackEnd()
+ instance will be passed as the first argument to the
+ callable. Useful for tasks needing backend.subtask().
+ host -- The host to bind to.
+ port -- The port to bind to.
+ logger -- A logger supporting the taskit.log interface.
+ codec -- A codec to be used in converting messages into strings.
+ end_resp -- The time (in seconds) that stop_server() and main()
+ should use to determine the responsiveness: it is used
+ for socket timeouts and while x:sleep() wait loops.
+ tracebacks -- Whether or not the server should output task tracebacks
+ (in the same way that they would be if not caught).
"""
FirstByteProtocol.__init__(self, logger)
@@ -105,6 +108,7 @@ def __init__(self, tasks, host='127.0.0.1', port=DEFAULT_PORT,
self.host = host
self.port = port
self.codec = codec
+ self.tracebacks = tracebacks
self.task_count = 0
# Is this necessary to avoid problems with the task counter getting
# corrupted? That is, are self.task_count += 1 and self.task_count -= 1
@@ -147,6 +151,8 @@ def _handler(self, conn):
except Exception as e:
self.log(ERROR, 'Error while fulfilling task %r: %r' % (task, e))
res = ['error', e.__class__.__name__, e.args]
+ if self.tracebacks:
+ show_err()
else:
self.log(INFO, 'Finished fulfilling task %r' % task)
finally:
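The effect of the new `tracebacks` flag can be sketched in isolation. `handle` below is a hypothetical stand-in for the `_handler` error path, not the real `BackEnd` method:

```python
import sys

def show_err():
    # Same trick the commit adds to taskit/common.py: replay the active
    # exception through sys.excepthook, so it prints exactly like an
    # uncaught traceback, without actually killing the thread.
    sys.excepthook(*sys.exc_info())

def handle(task, tracebacks=True):
    # Hypothetical stand-in for BackEnd._handler's try/except around a task.
    try:
        return ['success', task()]
    except Exception as e:
        if tracebacks:
            show_err()  # full traceback to stderr; the server keeps running
        return ['error', e.__class__.__name__, e.args]

assert handle(lambda: 21 * 2) == ['success', 42]
assert handle(lambda: 1 / 0, tracebacks=False) == \
    ['error', 'ZeroDivisionError', ('division by zero',)]
```

Either way the client still receives the `['error', type, args]` group; the flag only controls whether the server side also prints the traceback.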
9 taskit/common.py
@@ -1,13 +1,14 @@
import time
import json
import pickle
+import sys
from .log import null_logger, ERROR
__all__ = ['DEFAULT_PORT', 'STOP', 'KILL', 'STATUS', 'bytes', 'basestring',
- 'FirstByteCorruptionError', 'FirstByteProtocol', 'JSONCodec',
- 'PickleCodec']
+ 'show_err', 'FirstByteCorruptionError', 'FirstByteProtocol',
+ 'JSONCodec', 'PickleCodec']
DEFAULT_PORT = 54543
@@ -27,6 +28,10 @@ def bytes(s, enc):
STATUS = '<status>'
+def show_err():
+ sys.excepthook(*sys.exc_info())
+
+
class FirstByteCorruptionError(Exception):
"""
Exception raised when the first byte of a FB LMTP message is not a 0 or 1.
1  taskit/frontend.py
@@ -124,6 +124,7 @@ def _do_cb(self, task, cb, error_cb, *args, **kw):
except BackendProcessingError as e:
if error_cb is None:
self.log(ERROR, e.__traceback__)
+ show_err()
elif error_cb:
error_cb(e)
else:
7 taskit/log.py
@@ -24,9 +24,14 @@ class OutToLog(object):
def __init__(self, log, level=INFO):
self.log = log
self.level = level
+ self.cache = ''
def write(self, s):
- self.log(self.level, s)
+ lines = s.split('\n')
+ lines[0] = self.cache + lines[0]
+ while len(lines) > 1:
+ self.log(self.level, lines.pop(0))
+ self.cache = lines.pop(0)
class OutToError(OutToLog):
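Why the new cache matters: traceback printers call `write()` many times per line, so the old one-call-one-log-record behavior produced ragged output. A minimal sketch of the buffering this hunk adds, with a plain callable standing in for a taskit.log logger:

```python
INFO = 'INFO'  # stand-in for taskit.log's level constant

class OutToLog(object):
    """File-like object that forwards only complete lines to a logger."""
    def __init__(self, log, level=INFO):
        self.log = log
        self.level = level
        self.cache = ''  # partial line carried over from the last write()

    def write(self, s):
        lines = s.split('\n')
        lines[0] = self.cache + lines[0]
        # Emit every completed line; hold back the trailing fragment.
        while len(lines) > 1:
            self.log(self.level, lines.pop(0))
        self.cache = lines.pop(0)

records = []
out = OutToLog(lambda level, msg: records.append(msg))
out.write('Traceback (most')   # no newline yet -- nothing logged
out.write(' recent call last):\n  ...')
assert records == ['Traceback (most recent call last):']
assert out.cache == '  ...'
```

Each log record now corresponds to exactly one logical output line, regardless of how the writer chunked its calls.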
8 taskit/simple.py
@@ -3,8 +3,10 @@
much simpler and more Pythonistic and without an event-loop.
"""
+import sys
from .threaded import *
+from .common import show_err
__all__ = ['null_cb', 'taskit', 'Task']
@@ -49,11 +51,7 @@ def _do_cb(self, cb, error_cb, *args, **kw):
res = self.work(*args, **kw)
except Exception as e:
if error_cb is None:
- # Just let it explode into the thread-space
- # TODO: This doesn't necessarily do what we want, but there is
- # no other way to do so. If all that is desired is logging,
- # error_cb should be a callable that will take care of that.
- raise
+ show_err()
elif error_cb:
error_cb(e)
else:
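After this commit, `error_cb` has three behaviors in both `simple.py` and `frontend.py`: `None` prints the traceback via `show_err()`, any other falsy value swallows the error, and a callable receives the exception object. A sketch of those semantics (`run_with_cb` is an illustrative name, not the library's API):

```python
import sys

def show_err():
    # Print the active exception as if it were uncaught (sys.excepthook).
    sys.excepthook(*sys.exc_info())

def run_with_cb(work, cb, error_cb):
    # error_cb: None -> print traceback; other falsy -> silently ignore;
    # callable -> receives the exception object.
    try:
        res = work()
    except Exception as e:
        if error_cb is None:
            show_err()
        elif error_cb:
            error_cb(e)
    else:
        cb(res)

results, errors = [], []
run_with_cb(lambda: 21 * 2, results.append, errors.append)
run_with_cb(lambda: 1 / 0, results.append, errors.append)
run_with_cb(lambda: 1 / 0, results.append, False)  # swallowed
assert results == [42]
assert isinstance(errors[0], ZeroDivisionError)
```

The change from `raise` to `show_err()` means an unhandled task error is now reported in full instead of exploding into the worker thread, where the traceback could be lost.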