Conversation


@glbrntt glbrntt commented Jul 16, 2019

Motivation:

Logging is a hugely valuable tool for debugging, but we currently
have no logging whatsoever.

Modifications:

  • Add swift-log as a dependency
  • Add (some) logging to the client

Result:

  • More visibility into the lifecycle of the application

glbrntt added 3 commits July 16, 2019 10:13

weissi commented Jul 16, 2019

This looks really great @glbrntt! Thank you


@MrMage MrMage left a comment


Sorry, I am not very familiar with swift-log. Some overall questions:

  1. How can users change the "output target" of the logger, e.g. if logs should be written to a specific file? I've noticed some internal Loggers in the library; therefore users could not change their handler property. Also, having to update the handler of like 10 different logger instances would be cumbersome.
  2. How can users change the log level?
  3. Have you ensured that all logger uses provide as much context as possible? E.g. BaseClientCall takes in a Logger instance without adding extra "this log message was produced somewhere inside BaseClientCall" metadata.
  4. Could you provide an example of the log output for a typical run?

self.status = self.responseHandler.statusPromise.futureResult

self.streamPromise.futureResult.whenFailure { error in
  self.logger.error("failed to create http/2 stream: \(error)")
Collaborator


Should the error parameter be part of the metadata? (I'm unfamiliar with best practices here.)

Collaborator Author


Good question, I'm not super familiar with best practices here either!

From the docs:

Logging metadata is metadata that can be attached to loggers to add information that is crucial when debugging a problem

It seems like it could be either. I suppose adding it to the metadata would make it easier for handlers to extract the description of the actual Error the message relates to, which could be useful for e.g. emitting metrics.

@weissi do you have any insight here on when to use metadata vs. when to include it in the log message? Things like identity (request ID, connection ID) seem obvious since they're useful for separating out instances. Things like errors are less clear.
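
For concreteness, here is a sketch of the two styles under discussion; the logger and the error type are hypothetical stand-ins for whatever is in scope at the real call site, and neither form is prescribed by this PR:

import Logging

// Hypothetical stand-ins for the values available at the call site.
var logger = Logger(label: "example")
struct StreamCreationError: Error {}
let error: Error = StreamCreationError()

// Option 1: the error is interpolated into the message.
logger.error("failed to create http/2 stream: \(error)")

// Option 2: the error is attached as metadata, keeping the message constant
// so handlers can pull the error description out separately.
logger.error("failed to create http/2 stream", metadata: ["error": "\(error)"])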

@weissi
Copy link
Contributor

weissi commented Jul 16, 2019

Sorry, I am not very familiar with swift-log. Some overall questions:

  1. How can users change the "output target" of the logger, e.g. if logs should be written to a specific file? I've noticed some internal Loggers in the library; therefore users could not change their handler property. Also, having to update the handler of like 10 different logger instances would be cumbersome.

swift-log is an API-only package. The application is required to select exactly one logging backend by bootstrapping it in main.swift or so. For an example see here in the docs. All Logger instances will use the globally configured backend automatically. If for whatever reason you want to override what the application does, you can specify a logging backend when creating a Logger, but you should very rarely have to do that because usually the application author should control the logging destination.
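
A minimal sketch of that bootstrap step, assuming the stock StreamLogHandler that ships with swift-log; a file-writing backend would be plugged in the same way by returning a custom LogHandler from the factory:

import Logging

// Select the process-wide logging backend exactly once, early in main.swift.
// Every Logger created afterwards (including gRPC's internal ones) uses it.
LoggingSystem.bootstrap(StreamLogHandler.standardOutput)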

  2. How can users change the log level?

Logger's log level as well as metadata behave like values on the logger. So changing the logging level for a given Logger is quite straightforward:

var newLogger = existingLogger
newLogger.logLevel = .error

The configured logging backend may or may not specify some more global policy around the log level.

  3. Have you ensured that all logger uses provide as much context as possible? E.g. BaseClientCall takes in a Logger instance without adding extra "this log message was produced somewhere inside BaseClientCall" metadata.
  4. Could you provide an example of the log output for a typical run?

I'll leave those for George.


glbrntt commented Jul 16, 2019

  2. How can users change the log level?

To expand on @weissi's answer: the application user should be able to do this when configuring the LoggingSystem since they get the label of a Logger as input to the bootstrap closure (see here).

We can also provide a way to make it easier for users to configure logging on a subsystem basis (i.e. client call, client connection) so that they can tune the loggers they're interested in. The gRPC core lib gives you some control over this with environment variables. We can provide methods to parse whatever environment variables we decide on, and to check whether a logger is recognised as a gRPC-Swift logger, which should make this fairly straightforward for users.
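
As an illustration of the per-label tuning mentioned above; the "io.grpc" label prefix is purely hypothetical, since this PR doesn't fix a label scheme:

import Logging

LoggingSystem.bootstrap { label in
    var handler = StreamLogHandler.standardOutput(label: label)
    // Hypothetical label prefix, used only for this sketch.
    handler.logLevel = label.hasPrefix("io.grpc") ? .debug : .info
    return handler
}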

  3. Have you ensured that all logger uses provide as much context as possible? E.g. BaseClientCall takes in a Logger instance without adding extra "this log message was produced somewhere inside BaseClientCall" metadata.

The file, function and line are all captured when you write a log message (see here). Of course whether that is emitted depends on the handler.

Thinking about it a little more it's probably worth having a UUID on the client connection since there could be many within a single application. I'll add that.
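
Roughly what that could look like; the label is illustrative, though the "connection-id" metadata key does match the sample log output further down this thread:

import Foundation
import Logging

var logger = Logger(label: "io.grpc.client-connection")  // illustrative label
// Attach the identifier once; it is then emitted with every message.
logger[metadataKey: "connection-id"] = "\(UUID())"
logger.info("attempting to connect")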

  4. Could you provide an example of the log output for a typical run?

When I update the PR I'll post debug logs from e.g. a single unary call.

In the meantime you can see output in the build logs from the default log handler (level is info): https://travis-ci.org/grpc/grpc-swift/jobs/559374655


MrMage commented Jul 17, 2019

swift-log is an API-only package. The application is required to select exactly one logging backend by bootstrapping it in main.swift or so. For an example see here in the docs. All Logger instances will use the globally configured backend automatically. If for whatever reason you want to override what the application does, you can specify a logging backend when creating a Logger, but you should very rarely have to do that because usually the application author should control the logging destination.

Thanks, that answers my question!

We can also provide a way to make it easier for users to configure logging on a subsystem basis (i.e. client call, client connection) so that they can tune the loggers they're interested in. The gRPC core lib gives you some control over this with environment variables. We can provide methods to parse whatever environment variables we decide on, and to check whether a logger is recognised as a gRPC-Swift logger, which should make this fairly straightforward for users.

As long as the user can e.g. drop log messages below the "warning" level in their handler or global config, I think we should be fine for now. We could look into what is needed in terms of more fine-grained control later on, but I guess even then filtering by label could be sufficient.

The file, function and line are all captured when you write a log message (see here). Of course whether that is emitted depends on the handler.

Stupid me, that makes total sense!

Thinking about it a little more it's probably worth having a UUID on the client connection since there could be many within a single application. I'll add that.

I think that's a good idea.

When I update the PR I'll post debug logs from e.g. a single unary call.

In the meantime you can see output in the build logs from the default log handler (level is info): https://travis-ci.org/grpc/grpc-swift/jobs/559374655

Thanks, looking forward to the example log. In the meantime, the CI job has lines like error: error: StreamClosed; should we avoid the extra error: in our log statements given that we already use the error log level?


MrMage commented Jul 17, 2019

To summarize: No objections from my side, just waiting for UUIDs to get added and CI to turn green. I would also appreciate a second review from @weissi or someone else who is familiar with Swift-log best practices.


glbrntt commented Jul 17, 2019

Thanks, looking forward to the example log. In the meantime, the CI job has lines like error: error: StreamClosed; should we avoid the extra error: in our log statements given that we already use the error log level?

I think attaching errors to the metadata as you mentioned is probably the best way to deal with this; it's then more obvious that the value in the metadata is just the Error.

just waiting for ... CI to turn green.

Travis killed it because of too many log lines; it should be fine once we disable logging or change the level to error.


MrMage commented Jul 17, 2019

Travis killed it because of too many log lines; it should be fine once we disable logging or change the level to error.

Sounds good; should we do that in the tests, then? We should probably avoid increasing the test runtimes too much, too.


glbrntt commented Jul 18, 2019

@MrMage I forgot to post some logs yesterday; this is from a unary call:

2019-07-18T10:12:29+0100 info: making EventLoopGroup for userDefined(GRPC.NetworkImplementation.posix) network preference
2019-07-18T10:12:29+0100 debug: created MultiThreadedEventLoopGroup for userDefined(GRPC.NetworkImplementation.posix) preference
2019-07-18T10:12:29+0100 info: making server bootstrap with event loop group of type MultiThreadedEventLoopGroup
2019-07-18T10:12:29+0100 debug: Network.framework is available but the group is not typed for NIOTS, falling back to ServerBootstrap
2019-07-18T10:12:29+0100 debug: creating a ServerBootstrap
2019-07-18T10:12:29+0100 info: making EventLoopGroup for userDefined(GRPC.NetworkImplementation.posix) network preference
2019-07-18T10:12:29+0100 debug: created MultiThreadedEventLoopGroup for userDefined(GRPC.NetworkImplementation.posix) preference
2019-07-18T10:12:29+0100 info: connection-id=891F6F95-C255-4907-84C7-6E4B5C2D258A attempting to connect to hostAndPort("localhost", 54919)
2019-07-18T10:12:29+0100 info: connectivity state change: idle to connecting
2019-07-18T10:12:29+0100 info: making client bootstrap with event loop group of type MultiThreadedEventLoopGroup
2019-07-18T10:12:29+0100 debug: Network.framework is available but the group is not typed for NIOTS, falling back to ClientBootstrap
2019-07-18T10:12:29+0100 debug: creating a ClientBootstrap
2019-07-18T10:12:29+0100 info: connection-id=891F6F95-C255-4907-84C7-6E4B5C2D258A backoff exhausted, no more connection attempts will be made
2019-07-18T10:12:29+0100 info: connectivity state change: connecting to ready
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC making unary call to '/echo.Echo/Get', request type: Echo_EchoRequest, response type: Echo_EchoResponse
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC writing request head: HTTPRequestHead { method: POST, uri: "/echo.Echo/Get", version: HTTP/2.0, headers: [("content-type", "application/grpc"), ("te", "trailers"), ("user-agent", "grpc-swift-nio"), ("grpc-accept-encoding", "identity"), ("host", "localhost")] }
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC writing request head: HTTPRequestHead { method: POST, uri: "/echo.Echo/Get", version: HTTP/2.0, headers: [("content-type", "application/grpc"), ("te", "trailers"), ("user-agent", "grpc-swift-nio"), ("grpc-accept-encoding", "identity"), ("host", "localhost")] }
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC serializing and writing Echo_EchoRequest protobuf
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC writing length prefixed protobuf message
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC writing end
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC writing end
2019-07-18T10:12:29+0100 info: appending 10 bytes to buffer
2019-07-18T10:12:29+0100 debug: creating new buffer from slice
2019-07-18T10:12:29+0100 debug: read 1 byte compression flag: 0
2019-07-18T10:12:29+0100 debug: compression is not enabled for this message
2019-07-18T10:12:29+0100 debug: parse state changed from expectingCompressedFlag to expectingMessageLength
2019-07-18T10:12:29+0100 debug: read 4 byte message length: 5
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC processing response head: HTTPResponseHead { version: HTTP/2.0, status: ok, headers: [("content-type", "application/grpc")] }
2019-07-18T10:12:29+0100 debug: parse state changed from expectingMessageLength to expectingMessage(5)
2019-07-18T10:12:29+0100 debug: read 5 byte message
2019-07-18T10:12:29+0100 debug: parse state changed from expectingMessage(5) to expectingCompressedFlag
2019-07-18T10:12:29+0100 info: read length-prefixed message
2019-07-18T10:12:29+0100 info: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC using inbound compression: none
2019-07-18T10:12:29+0100 debug: no readable bytes; nilling-out buffer
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read response headers: [("content-type", "application/grpc")]
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC channel-handler=GRPCClientUnaryResponseChannelHandler received response headers: [("content-type", "application/grpc")]
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC channel-handler=GRPCClientUnaryResponseChannelHandler inbound state changed from expectingHeadersOrStatus to expectingMessageOrStatus
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read state changed from expectingHeaders to expectingBodyOrTrailers
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC appending 26 bytes to buffer
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC creating new buffer from slice
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read 1 byte compression flag: 0
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC compression is not enabled for this message
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC parse state changed from expectingCompressedFlag to expectingMessageLength
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read 4 byte message length: 21
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC parse state changed from expectingMessageLength to expectingMessage(21)
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read 21 byte message
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC parse state changed from expectingMessage(21) to expectingCompressedFlag
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read length-prefixed message
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC no readable bytes; nilling-out buffer
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read message ByteBuffer { readerIndex: 0, writerIndex: 21, readableBytes: 21, capacity: 21, slice: _ByteBufferSlice { 14..<35 }, storage: 0x0000000110003a00 (1024 bytes) }
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC deserializing 21 bytes as Echo_EchoResponse
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC response-type=Echo_EchoResponse channel-handler=GRPCClientUnaryResponseChannelHandler received response message
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC channel-handler=GRPCClientUnaryResponseChannelHandler inbound state changed from expectingMessageOrStatus to expectingStatus
2019-07-18T10:12:29+0100 debug: no buffer to read from
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC no buffer to read from
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read state changed from expectingBodyOrTrailers to expectingBodyOrTrailers
2019-07-18T10:12:29+0100 info: appending 0 bytes to buffer
2019-07-18T10:12:29+0100 debug: creating new buffer from slice
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC unmarshalling status-message: OK
2019-07-18T10:12:29+0100 debug: 1 more byte needed to read compression flag
2019-07-18T10:12:29+0100 debug: no readable bytes; nilling-out buffer
2019-07-18T10:12:29+0100 debug: channel-handler=GRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read status GRPC.GRPCStatus
2019-07-18T10:12:29+0100 info: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC channel-handler=GRPCClientUnaryResponseChannelHandler received respone status: ok
2019-07-18T10:12:29+0100 debug: channel-handler=HTTP1ToRawGRPCClientCodec request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC read state changed from expectingBodyOrTrailers to ignore
2019-07-18T10:12:29+0100 debug: request-id=479CB2C4-6E05-4A53-9921-022A7FD77BAC channel-handler=GRPCClientUnaryResponseChannelHandler inbound state changed from expectingStatus to ignore
2019-07-18T10:12:29+0100 info: connection-id=891F6F95-C255-4907-84C7-6E4B5C2D258A shutting down channel
2019-07-18T10:12:29+0100 debug: user has initiated shutdown
2019-07-18T10:12:29+0100 info: connectivity state change: ready to shutdown
2019-07-18T10:12:29+0100 info: connection-id=891F6F95-C255-4907-84C7-6E4B5C2D258A client connection shutdown successfully


MrMage commented Jul 18, 2019

LGTM. @weissi, would you mind reviewing as well?


weissi commented Jul 19, 2019

@MrMage / @glbrntt this looks good to me too!

@MrMage MrMage merged commit 252e720 into grpc:nio Jul 19, 2019
@glbrntt glbrntt deleted the swift-log branch August 5, 2020 09:06