
Simplify servicelogger.py's search for the servicevessel #119

Open
aaaaalbert opened this issue Jul 30, 2015 · 4 comments

@aaaaalbert
Contributor

The servicelogger currently uses a rather complicated way to find out where it should put its logfiles:

The first two items could be implemented in a more general way by simply persist.restore_object-ing the nodeman.cfg, which plainly states the name of the servicevessel.

The theoretical downside I can see is that if the servicevessel owner decided to give up the vessel, and/or the owner of another vessel on the node transferred ownership to the servicevessel pubkey, then nodeman.cfg would not reflect the new assignment, whereas vesseldict would, and the service logs would end up in the wrong vessel. (I don't think this has ever happened, though; the clearinghouse does nothing like that.)
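For illustration, the proposed direct lookup might look something like the sketch below. Both the stand-in restore_object (the real one lives in Seattle's persist.py) and the "service_vessel" key name are assumptions for this sketch, not the actual code:

```python
import pickle

# Stand-in for Seattle's persist.restore_object (hypothetical minimal
# version that just unpickles the stored object; the real code differs).
def restore_object(filename):
    with open(filename, "rb") as f:
        return pickle.load(f)

def find_servicevessel(config_path="nodeman.cfg"):
    # Assumes the config dict names the service vessel under a
    # "service_vessel" key -- the key name here is an assumption.
    config = restore_object(config_path)
    return config["service_vessel"]
```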

@JustinCappos

We keep the node manager state in memory anyway, in nmmain's module
space. So can't we just read it from there?
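As a sketch of that idea: if servicelogger were handed the config dict that nmmain already holds, no file read would be needed at all. The nodeconfig parameter and the "service_vessel" key name are assumptions about nmmain's module-level state, not actual code:

```python
# Hypothetical: nmmain already holds the parsed node configuration in
# module scope, so servicelogger's init could take that dict directly
# instead of re-reading nodeman.cfg from disk.
def init_servicelogger(nodeconfig):
    # Fall back to the current directory if no service vessel is set;
    # the "service_vessel" key name is an assumption.
    return nodeconfig.get("service_vessel", ".")
```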


@aaaaalbert
Contributor Author

Yes, we can do that for nodemanager's use of servicelogger (assuming we refactor the init functions a bit).

A bigger problem exists in repy.py's use of it (via tracebackrepy). repy.py doesn't load the NM config, yet some internal Repy errors could get logged in the nodemanager log, which is why servicelogger contains the logic to find the service vessel.

I think that the Repy sandbox should not take care of setting up log files itself, but rather use standard streams, and have the nodemanager redirect them to the appropriate places. I'll open another issue for this.
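A rough sketch of that redirection design, under the assumption that the nodemanager launches the sandbox as a child process (the helper name, arguments, and log path are all illustrative, not existing API):

```python
import subprocess

# Hypothetical sketch of the proposed design: the sandbox just writes to
# its standard streams, and the caller (here, the nodemanager) redirects
# both streams into a log file inside the service vessel.
def run_logged(cmd, logpath):
    with open(logpath, "ab") as logfile:
        proc = subprocess.Popen(cmd, stdout=logfile,
                                stderr=subprocess.STDOUT)
        proc.wait()
        return proc.returncode
```

The nodemanager would then invoke the sandbox via something like run_logged(["python", "repy.py", ...], logpath) with logpath pointing into the service vessel, and servicelogger would no longer need to locate the vessel itself.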

@JustinCappos

What if this runs without a node manager? For example, as a standalone
Repy program.


@aaaaalbert
Contributor Author

In the usual running-local case, python repy.py restrictionsfile my_program.r2py, the command-line arguments that enable servicelogger aren't given, so logging goes to stdout.
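That fallback behavior could be sketched like so; log_message and its signature are hypothetical, not servicelogger's actual API:

```python
import sys

# Hypothetical fallback: with no service vessel configured (e.g. repy.py
# run standalone from the command line), messages go to stdout; otherwise
# they are appended to the configured log file inside the vessel.
def log_message(message, logfile=None):
    if logfile is None:
        sys.stdout.write(message + "\n")
    else:
        with open(logfile, "a") as f:
            f.write(message + "\n")
```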
