
chore(logger): refactor-- also includes new features #4502

Merged
merged 2 commits into deis:master from krancour:logger-rewrite on Oct 5, 2015

Conversation

krancour
Contributor

Fixes #4000; replaces #4435

TL;DR: A complete rewrite of deis-logger aimed at easier extensibility and maintenance. Some bug fixes and new features are included as well, as are many new unit tests.

Motivations

  • Stateless logger #4000 called for the implementation of an in-memory ring buffer as an alternative to file-based log storage (which doesn't work out so well for those running stateless / without Ceph).
  • The possibility that in-memory storage is used as an alternative to file-based storage means deis-controller can no longer mount a volume in common with deis-logger to get logs. deis-logger needed to evolve to serve logs to deis-controller over HTTP.
  • deis-logger was not originally written to read logs; only to write them.
  • The implementation offered by the community in feat(logger) Stateless logger implementation through in memory ring buffer #4083 was welcome, but fell short in a few key areas:
    • Introduced the concept of a "handler" for storing and reading logs, but clashed with deis-logger's existing notion of what a "handler" is. (Overloaded term.)
    • Introduced some undesired complexity-- e.g. deis-controller only read logs from deis-logger over HTTP if the in-memory storage option was in use. (deis-controller shouldn't have to know how logs are being stored and shouldn't have to adjust its behavior based on that. Logs could/should always be retrieved from the deis-logger over HTTP.)
    • Only checked the "handler" type to use at startup; couldn't dynamically reconfigure itself.
    • Didn't adequately address failure cases.
  • Despite those few problems, feat(logger) Stateless logger implementation through in memory ring buffer #4083 actually worked great! Big shout out to @rvadim and @aj-may for their efforts on this. Unfortunately, since the implementation was layered on top of existing code that was in rough shape to begin with, this didn't bode well for future maintenance and extension. deis-logger as it stood prior to this PR:
    • Was based on a library that hadn't been updated in three years.
    • Had lots of dead, untested, and hard-to-follow code.
    • Existing support for writing logs to files and draining logs was a hack to begin with.
    • Only supported UDP-based drains. (TCP needed for use with TLS-- future feature.)

Issues such as #4280 are also considerations that informed the approach.

Weighing the myriad concerns, a scorched earth re-design seemed the way to go.

Design

Here's a brief overview of the design. (I'll see about adding a use case diagram tomorrow to help make this easier for reviewers.)

Logger

main.go handles startup including flag parsing. It supports all the same flags as the old implementation for complete backwards compatibility. It initializes and starts four components:

  1. A syslogish.Server: reads and writes logs. For writing, it runs a loop that receives logs (one per packet) over UDP. A non-blocking write immediately puts these into an internal queue (see the sketch just after this list); this is exactly the same as before. Another loop ranges over the queue and processes log messages, delegating their storage to an underlying storage.Adapter (file-based or in-memory) and delegating draining to an underlying drain.LogDrain (UDP-based or TCP-based 🆕; SSL not supported yet). Reads are likewise delegated to the underlying storage.Adapter. A key improvement is that storage write failures (rare) and failures to dial the drain (which could happen if Papertrail went down, for instance) are handled silently. This sounds bad, but it helps avoid the logging death spiral that happens when the logger can't handle its own error messages. (All other failures, the ones that won't start the death spiral, are logged, of course.)
  2. A weblog.Server: handles HTTP GET and DELETE requests. Delegates all actual work to the syslogish.Server.
  3. A Configurer: uses a ticker and watches etcd for changes to /deis/logs/storageAdapterType and /deis/logs/drain. When changes are detected, it uses factories that take the values of those keys to construct appropriate storage.Adapter and drain.LogDrain instances, respectively. These are then injected into the syslogish.Server.
  4. A Publisher: uses a ticker and periodically writes the logger's host and port to etcd under the /deis/logs/host and /deis/logs/port keys, respectively.
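
For reviewers, here is a minimal sketch of the non-blocking write described in item 1. This is not the actual code from this PR; the names and buffer size are illustrative only.

package syslogish

import "net"

// Server is an illustrative stand-in for the real syslogish.Server.
type Server struct {
    conn  net.PacketConn
    queue chan string
}

// receive reads one log message per UDP packet and enqueues it without blocking.
func (s *Server) receive() {
    buf := make([]byte, 1024)
    for {
        n, _, err := s.conn.ReadFrom(buf)
        if err != nil {
            continue
        }
        select {
        case s.queue <- string(buf[:n]):
            // Enqueued; a separate loop stores and drains it.
        default:
            // Queue is full; drop the message rather than block the receive loop.
        }
    }
}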

Apart from these four primary components, there are also two implementations each of storage.Adapter and drain.LogDrain. These are unit tested rather thoroughly. (Which is the reason this PR is a net addition of lines to the code base.)
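
To make the split of responsibilities concrete, the two abstractions look roughly like the following. This is a sketch only; the actual method names and signatures in the storage and drain packages may differ.

package logger

// Adapter (storage.Adapter in the real code) stores logs and, unlike a drain,
// must also be able to read back and destroy what it has stored.
type Adapter interface {
    Write(app string, message string) error
    Read(app string, lines int) ([]string, error)
    Destroy(app string) error
}

// LogDrain (drain.LogDrain in the real code) only forwards messages to an
// external endpoint over UDP or TCP; nothing is ever read back from it.
type LogDrain interface {
    Send(message string) error
}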

One other notable improvement is that complex cleanup logic has been removed. It is not needed AFAICT.

The Fleet unit for deis-logger no longer Requires deis-store-volume, but merely Wants it. This allows it to start up in the absence of Ceph.

Controller

The modifications to deis-controller are more modest. confd is now used to look up the deis-logger host. Combining that with a hard-coded port 8088 permits discovery of the deis-logger component's new weblog.Server. The RESTful interface it exposes allows the controller to read and destroy logs. Failure cases are all accounted for, and tests have been added to simulate the failure scenarios and assert that they are handled properly.
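
For reviewers who want that interface at a glance, the weblog.Server side of the conversation amounts to something like the sketch below. The routing code in the PR differs; the path and the log_lines query parameter here are inferred from the controller change shown further down.

package weblog

import (
    "fmt"
    "net/http"
    "strconv"
)

// LogReader is an assumed view of syslogish.Server, used only for this sketch.
type LogReader interface {
    ReadLogs(app string, lines int) ([]string, error)
    DestroyLogs(app string) error
}

type handler struct {
    logs LogReader
}

// ServeHTTP handles GET (read logs) and DELETE (destroy logs) on port 8088.
func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    app := r.URL.Path[1:] // e.g. GET /myapp?log_lines=100
    switch r.Method {
    case "GET":
        lines, _ := strconv.Atoi(r.URL.Query().Get("log_lines"))
        logLines, err := h.logs.ReadLogs(app, lines)
        if err != nil {
            http.Error(w, err.Error(), http.StatusNotFound)
            return
        }
        for _, line := range logLines {
            fmt.Fprintln(w, line)
        }
    case "DELETE":
        if err := h.logs.DestroyLogs(app); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    default:
        w.WriteHeader(http.StatusMethodNotAllowed)
    }
}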

The Fleet unit for deis-controller no longer expresses a dependency on deis-store-volume, nor does it mount the volumes from that container on the deis-controller container. (Let's hear it for decoupling.)

deisctl

deisctl has been modified so that it now installs deis-logger with the stateless platform (it was previously omitted) and starts that component during a stateless platform start or restart.

@krancour
Contributor Author

@rvadim and @aj-may, I would highly value your feedback on this.

@krancour changed the title from "chore(logger): complete rewrite-- also includes new features" to "chore(logger): refactor-- also includes new features" on Sep 17, 2015
@krancour
Contributor Author

refactored logger

@carmstrong
Contributor

💯 on this write-up

@carmstrong
Contributor

@krancour Unless I'm mistaken, this also means we can drop deis-store-volume and store-metadata since we're not using them anymore! We'll need to figure out an upgrade path, though.

@krancour
Contributor Author

@carmstrong logger will still use deis-store-volume if using file-based log storage.

@carmstrong
Contributor

logger will still use deis-store-volume if using file-based log storage.

Ah, gotcha. Ok. We can remove that for Deis v2. 👍

        c.manageStorageAdapter()
        c.manageDrain()
    }
}
Contributor

How do you stop a Configurer?

On a side note, I love the name :)

Contributor Author

You don't stop it. Once it starts, it runs until the program terminates.

@kalbasit
Contributor

I haven't had time to review this in full; I'll finish my review tonight.

@krancour
Contributor Author

@kalbasit thanks! I appreciate having your eyes on this. You're already asking some great questions. Feel free to fight me on any of it if I am wrong. 😄

@benwilber
Contributor

This looks really cool! I don't have a ton of visibility into the deis logger subsystem (I'm still new to this project), but I'm wondering whether any consideration was given to basing the syslog messaging components on a dedicated syslog server, something like syslog-ng. I'm sure there are good reasons for not using syslog-ng as the logging server (in the same way that nginx is used as the routing layer), but since the deis logger appears to reimplement a number of syslog-ng's core features, I'm wondering what those reasons are. This is just for my own education on the architecture/design decisions.

@krancour
Contributor Author

@benwilber I wasn't around when the decision was initially made, but I'd wager a guess that it had a lot to do with how easy it was to hack on this existing Go-based syslog server to add features like custom message parsing and publishing of the logger host:port so that all the logspouts can discover it.

Fast forward to the current day, and maintaining backwards compatibility (supporting all the same command line flags, env vars, and etcd keys that we have historically) is also a concern. Since semantic versioning allows us not to be concerned with that WRT the upcoming v2 (which is only in the beginning stages of development), feel free to suggest syslog-ng as an alternative implementation for this component.

@krancour
Contributor Author

@kalbasit I've fixed the possible race conditions you pointed out. Thanks again! I'd appreciate another spot-check if you've got the time.

@krancour
Contributor Author

The problem that was interfering with the controller's functional tests should be fixed now.

path = os.path.join(settings.DEIS_LOG_DIR, self.id + '.log')
if not os.path.exists(path):
    try:
        url = "http://{}:{}/{}?log_lines={}".format(settings.LOGGER_HOST, 8088, self.id,
Contributor

can we turn 8088 into a constant?

# the overall success of deleting an application, but we should log it.
err = 'Error deleting existing application logs: {}'.format(e)
log_event(self, err, logging.WARNING)
pass
Member

pass is unnecessary here.

Contributor Author

You're right. Removing.

@krancour
Contributor Author

krancour commented Oct 1, 2015

@technosophos, thanks for the feedback. I'll respond or apply suggestions before morning.

if err != nil {
    return nil, err
}
return &Server{conn: c, queue: make(chan string, 5)}, nil
Member

I'd suggest buffering this at closer to 1000. 5 isn't gonna buy you anything.

Contributor Author

So this is an interesting thing... while I've refactored most of the code for easier maintenance, some of the mechanics in terms of how this component actually works are inspired by / true to the original implementation. The queue depth of 5 came straight out of there.

That being said, I do not object to increasing it.

@technosophos
Member

Alright... one last question, then I'll stop bugging you. As I went through the code in detail, and then re-read your design, I began to wonder why we only allow one adapter, and why we make drain a separate thing. A drain seems to be adaptable, and we could conceivably want multiple adapters. Would it make sense to change that part to allow a list of adapters, and send the same message to each adapter? In that model, would it make sense to treat a drain as an adapter?

@krancour
Contributor Author

krancour commented Oct 2, 2015

I'll stop bugging you

@technosophos, you're not bugging me. If anything, I appreciate all the nice catches you made. There's a lot of concurrency going on in here and I think I handled most of it pretty well, but I'm nothing but appreciative for being called out on the spots I missed. 😄

As to the rest of your question... it's a good one. There is a certain commonality between adapters and drains. Aren't they both just "handling" a message, after all? I actually thought about that pretty early on in the refactor, but deliberately decided against it. Here's my justification:

To start, the interfaces for storage adapters and drains shouldn't actually be the same. A component handling storage inherently must be able to carry out a very important operation that a drain cannot-- retrieving what's been stored. So, I believe a storage adapter is its own distinct and specialized thing.

If not for the above we might conclude that storage adapters and drains are just different sorts of "handlers." Then it would make a great deal of sense to manage a configurable chain of handlers-- all of which can do something with a log message before passing that message on to the next handler in line. The first would store, the second would drain...

But if we accept my earlier justification for storage adapters and drains being two distinct things, then I ask myself why we'd ever need to concurrently use more than one storage adapter or more than one drain. What would it even mean, for instance, to use more than one storage adapter? What would be the point of storing log messages both on disk and in memory? When they're retrieved, which of those two would we retrieve them from? It doesn't make too much sense. I guess a stronger argument could be made for using multiple drains (even though that's probably an edge case), but configuring a chain of drains would mean some significant and possibly breaking changes to how drains are configured in etcd. Remember that even though my implementation is new, drains have been around in the old logger component for quite some time and we're obligated to keep the refactored logger backwards compatible with the old logger drain configuration.

With all of the above in mind, I was ultimately content with my conclusion that storage adapters and drains are two distinct sorts of things and multiple implementations of each are possible, but there's never a need to chain storage adapters and rarely, if ever, a need to chain drains-- and that's probably too much of an edge case to justify the complexity of making that possible while also maintaining backwards compatibility with the old, single drain configuration option.

Does any of that make sense? I'm not sure. It's late. 😫

@krancour
Contributor Author

krancour commented Oct 2, 2015

@mboersma and @technosophos, all your feedback has either been acted upon or responded to. You might need to expand the outdated diffs to see some of my responses. Feel free to LGTM if you are satisfied with where this stands now, but for my own part this latest crop of changes has me wanting to manually test-drive this one more time in addition to the CI and LGTMs. I'll get that done first thing tomorrow.

@technosophos
Member

With all of the concurrency fixes, LGTM.

I think we could talk architecture till the bikeshed gets painted. The bottom line to me is that the present architecture is a HUGE improvement, and definitely gives us flexibility if we want to, say, streamline write operations in the future. So I don't feel like I should belabor that discussion any more. This is a huge bundle of great work. Thanks to @krancour, @aj-may, and @rvadim.

@krancour
Contributor Author

krancour commented Oct 2, 2015

Ok. So one issue cropped up in testing. In comparing the performance of the new logger implementation to the old, I've determined that the major bottleneck for both is the drain. In testing this, I've sent many (millions?) of messages to my free Papertrail account today. At some point, it would appear they started throttling me. This actually exposed an issue...

If a write to the drain blocks until timeout (because of a problem on the other end), the logger's internal queue can get backed up. That means messages sit around waiting to be stored and drained since this happens sequentially to preserve the order of messages. If that queue fills up completely, the logger starts dropping new messages since there's no vacancy in the queue. Bottom line... if the drain starts failing, you're also not storing messages. Yikes.

(For what it's worth, the old logger implementation panics when this happens. The unit restarts the logger and you start with an empty queue again, but eventually get right back into the same scenario. Bottom line is the old implementation flaps when this happens.)

I spoke with @technosophos and we agreed that a good strategy here would be to use two separate queues for storage and drainage. Additionally, since the drainage queue is the one that's more likely to back up-- especially if there are problems on the other end of the drain-- we can make the timeouts on the drain writes inversely proportional to the queue's current depth... Is your queue nearly empty? Take your time sending the message. Is the queue nearly full or overflowing? Move things along faster-- even if it means some writes fail. This way newly arriving messages aren't as likely to get dropped because of a queue full of messages that probably can't be drained anyway.

@krancour
Contributor Author

krancour commented Oct 2, 2015

@technosophos in case it interests you, I think I've determined that Papertrail was not actually rate limiting me. I added some more debug logging and found that the backpressure during drainage is coming from timeouts on DNS lookups that occur during dialing.

So I think it's Comcast DNS that is rate limiting me!

@krancour
Contributor Author

krancour commented Oct 2, 2015

But fwiw, I think the strategy we discussed is still required and valid. So in the case of the UDP drain, I can add a timeout on the dialing that's inversely proportional to the queue depth and we'll see how that helps. I haven't started thinking yet about how the TCP drain needs to be modified; there we probably need to worry about the timeout on both the dialing and the write.

@krancour
Contributor Author

krancour commented Oct 3, 2015

the strategy we discussed is still required and valid

Strike that. It doesn't work out very well. Never mind whether the timeouts and/or failures have to do with DNS or with actually writing to an external log service. If attempts to drain messages are failing or timing out for any reason, our goal was to manage the queue depth more aggressively by spending progressively less time attempting to drain each message as the queue starts to back up. The problem with this, however, is that spending less time on each message means we're actually increasing the tempo of actions that will probably fail, and increasing the likelihood of continuing failure, since something that's timing out already won't magically start succeeding with a lower timeout. The net effect is that this strategy doesn't really help us recover at all. It can make things worse, in fact.

What we can do, however, is stop dialing so much. Dialing the drain's URL once per message is a pattern that was imported from the old logger implementation. It has the benefit of catching DNS changes quickly should they occur, but other than that, constantly closing and re-opening connections is really inefficient.

I'm going to update the drains so that they reuse each connection 100 times before dialing again. This is vastly more efficient, but reusing a connection only 100 times means we're re-dialing frequently enough that if DNS changes, it's never too long before we're dialing the updated address. I've already tried this with the UDP drain and it's working nicely. TCP might be a little harder, but I'm working on it.
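
In sketch form, the UDP drain change looks something like this. The field names are illustrative only, not the actual diff:

package drain

import "net"

// udpDrain redials only on the first send and after every 100 reuses, so DNS
// changes are still picked up without paying the dial cost on every message.
type udpDrain struct {
    address string
    conn    net.Conn
    sends   int
}

func (d *udpDrain) Send(message string) error {
    if d.conn == nil || d.sends >= 100 {
        if d.conn != nil {
            d.conn.Close()
        }
        conn, err := net.Dial("udp", d.address)
        if err != nil {
            return err
        }
        d.conn = conn
        d.sends = 0
    }
    d.sends++
    _, err := d.conn.Write([]byte(message))
    return err
}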

@krancour
Contributor Author

krancour commented Oct 3, 2015

@technosophos this is ready for your eyes again if you've got a few minutes at some point this weekend.

There's no timeout management or queue depth management in here. I've just split the one queue into two separate queues like we discussed so storing the next message doesn't need to wait for draining the previous message. And I improved the drains themselves by dialing less frequently-- which seems to have been a major source of problems.
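
Roughly, the split looks like the sketch below (names are illustrative); each queue is pumped by its own goroutine, so a slow or failing drain can no longer hold up storage:

package syslogish

// storer and drainer are assumed stand-ins for storage.Adapter and drain.LogDrain.
type storer interface {
    Write(message string) error
}

type drainer interface {
    Send(message string) error
}

// startPumps launches one goroutine per queue; the receive loop fans each
// message out to both queues with the same non-blocking sends as before, so
// a backed-up drain only fills its own queue.
func startPumps(storageQueue, drainQueue <-chan string, s storer, d drainer) {
    go func() {
        for msg := range storageQueue {
            s.Write(msg) // storing no longer waits on draining the previous message
        }
    }()
    go func() {
        for msg := range drainQueue {
            d.Send(msg) // drain failures back up only the drain queue
        }
    }()
}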

There is plenty of room to further optimize this later in follow-up PRs if needed, but as is, this is handling an insane amount of load with no problems-- somewhere in the neighborhood of 10,000 log messages a minute.

@krancour
Contributor Author

krancour commented Oct 3, 2015

I also left the latest changes unsquashed (for now) so you can more easily see the relevant diffs.

@carmstrong
Contributor

There is plenty of room to further optimize this later in follow-up PRs if needed, but as is, this is handling an insane amount of load with no problems-- somewhere in the neighborhood of 10,000 log messages a minute.

👍 Sounds perfectly acceptable to me!

@krancour
Contributor Author

krancour commented Oct 4, 2015

@carmstrong @technosophos the only remaining thing I might be worried about is that in the TCP drain, the strategy of reusing each connection 100 times could backfire if a connection sits idle for too long. I can't seem to find anything in Go that allows us to check the connection state, though.

@carmstrong
Contributor

@krancour Unless @technosophos has any objections, I think we can 🚢 this as is. It's a substantial improvement which we can always iterate on.

@technosophos
Member

@krancour Yeah, this looks good as-is. I see what you mean about the TCP connection re-use, but I don't think I'd worry about that just yet.

krancour added a commit that referenced this pull request Oct 5, 2015
chore(logger): refactor-- also includes new features
@krancour krancour merged commit 2333ccd into deis:master Oct 5, 2015
@mboersma mboersma added this to the v1.11 milestone Oct 8, 2015
@olalonde
Contributor

Is there any doc somewhere to enable the in memory logger in a stateless install? The docs mention:

In a Ceph-less cluster, the Logger component should be configured, instead, to use in-memory log storage.

but it doesn't explicitly say how to do that... I suppose it's something like:

deisctl config:set storageAdapterType=memory

@olalonde
Contributor

Oops, nevermind: http://docs.deis.io/en/latest/managing_deis/running-deis-without-ceph/#configure-logger

Should have kept reading.

@bacongobbler
Member

@olalonde it's a little farther down the doc :)

@krancour krancour deleted the logger-rewrite branch October 31, 2015 17:25