u/log as the single logging library? #25
Hi Peter, it is certainly possible and fairly easy to do both: to send traditional logs to µ/log, or to send µ/log events to traditional logging systems like Log4j/Logback/Timbre etc. Obviously, there is no point in doing both at the same time, otherwise you could create a spiralling loop between the two loggers.

I thought long about the possibility of including an appender for traditional log systems, or a backend for SLF4J, to publish existing logs into µ/log, and I'm not too sure that there is value in doing so. The fundamental problem of decoding the message into data-points that a machine can aggregate is the job that tools like Logstash and Splunk perform with varying degrees of efficiency and error. Instead, if you take the humans out of the picture, you should be able to quantitatively analyze and react to telemetry events without the need for decoding the message at all. Certainly, µ/log can do the heavy lifting of collecting and shipping your traditional logs to ElasticSearch directly, but you still need to decode the message.

To make a (contrived) example: it is like the people who migrate their datacenters to the cloud by simply lifting the existing software and machines, expecting it to be cheaper and faster. To really leverage the cloud you should design for the cloud. Similarly, to really leverage µ/log you should design for data-first telemetry.

I personally keep the two systems separate, as I don't want to pollute the good-quality telemetry that I get from µ/log instrumentation with a flood of meaningless and hard-to-analyse logs. I personally use the ElasticSearch appenders and index the data separately. However, if you want to write an appender that takes traditional logs and adds them into µ/log, you could do it with two different approaches: write an appender for a concrete logging implementation (e.g. Logback/Log4j) that republishes every log event via µ/log, or write an SLF4J backend so that everything going through the facade ends up in µ/log (a sketch of the first approach follows).
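For illustration, here is a minimal, hypothetical sketch of the first approach: a Logback appender installed programmatically on the root logger, republishing each log event as a µ/log event. The event name and keys (::legacy-log, :logger, :log/level, :log/message) are my own choice, not an existing µ/log convention.

(ns mulog-bridge.logback
  (:require [com.brunobonacci.mulog :as u])
  (:import [ch.qos.logback.classic Logger]
           [ch.qos.logback.core AppenderBase]
           [org.slf4j LoggerFactory]))

;; an appender that republishes every Logback event as a µ/log event
(defn mulog-appender []
  (proxy [AppenderBase] []
    (append [event]
      (u/log ::legacy-log
             :logger      (.getLoggerName event)
             :log/level   (str (.getLevel event))
             :log/message (.getFormattedMessage event)))))

;; attach it to the root logger at runtime
(defn install! []
  (let [^Logger root (LoggerFactory/getLogger Logger/ROOT_LOGGER_NAME)
        appender     (doto (mulog-appender)
                       (.setContext (.getLoggerContext root))
                       (.start))]
    (.addAppender root appender)))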
Either way, if you need to extract/decode the message and produce more meaningful data-events, the approach I would take would be: send the traditional logging to µ/log, then configure µ/log to send the logs to Kafka or Kinesis, have a streaming application that decodes the messages into data-points, and then index the data-points into ElasticSearch.
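As a rough illustration of the first leg of that pipeline, here is what shipping µ/log events to Kafka could look like; the configuration shape below is from my reading of the mulog-kafka module, so double-check the exact keys against its documentation:

;; requires the mulog-kafka module on the classpath
(u/start-publisher!
  {:type  :kafka
   :kafka {:bootstrap.servers "broker-1:9092,broker-2:9092"}
   ;; topic to publish the events to
   :topic "mulog"})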
Whether you choose to keep them separate and use the Logback appender listed above, or you choose to unify them and write your own appender, I would be interested in hearing your feedback on how it works for you, and I would be glad to help.
Hi Bruno, I haven't forgotten about this, just too much work on my plate at this moment to work on it. It's on my todo list though :) What I'd boil down your answer to:
1. Bridging traditional logs into µ/log is technically possible and fairly easy.
2. Whether doing so actually adds value, or just pollutes the good-quality telemetry.
It's the second point that has no correct answer - for some users it might provide benefits, even if it would be considered an anti-pattern; for others it would just mess up their data. In Elasticsearch it's relatively easy to find anything with a good query, but that's the human, non-scripted way. Still, once you find those queries by hand you can reuse them, unless the log message changes. If there was an SLF4J bridge I'd just try it out and see for myself. This way I'd need to implement it, for which I'd need to get acquainted with the SLF4J interfaces. Once the bridge is done it could be useful for others as well, though. I'll see if I find some time to tinker with it. Otherwise I'll just keep the setup as-is and try to slowly move my stuff into mulog.
Hi Peter, the value proposition of µ/log is to provide telemetry that can be queried, and from which you can extract value, in ways other tools won't allow.
Elasticsearch is an amazing tool to slice and dice the data the way you need. Combined with the richness of the µ/log data, it is hardly comparable to blunt log messages without contextual information (see examples here; they don't mean anything on their own). I can understand that you can still draw benefit from old logging for a variety of reasons, so I would say let's try to implement it and see what you get out of it. I would suggest using a concrete logger rather than the facade (SLF4J). With SLF4J it is more complicated because you can't have more than one bridge active at the same time, so you would need to commit to your new implementation all at once. I think that is riskier.
Thanks. I understand the value proposition, as I already partially implemented this on top of Timbre. My logs are written as JSON lines with additional keys, so I was already able to reason about my logs better. The rest of the services are usually logging rich text, which is parsed in Logstash into structured data. Some services are planning to log JSON lines as well. mulog's way is far simpler and richer though, and a natural extension to what I was already bolting on top of Timbre. Thanks for the log4j/logback tip, I'll see what to pick then.
Great, if you use Timbre, adding a µ/log appender is a "one-liner". Here is an example:

(ns timbre-mulog.core
(:require [taoensso.timbre :as timbre]
[com.brunobonacci.mulog :as u]
[clojure.string :as str]))
;; init μ/log global context (maybe add hostname and pid)
(u/set-global-context! {:app-name "myapp" :version "0.2.0" :env "local"})
;; init μ/log publisher
(u/start-publisher! {:type :console :pretty? true})
;; init timbre
(timbre/set-config!
{:level :debug
:appenders
{:println (timbre/println-appender {:stream :auto})
;; example μ/log appender
:mulog {:enabled? true
:fn (fn [{:keys [level ?ns-str vargs] :as d}]
(u/log ::log-event :level level :mulog/namespace ?ns-str :message (str/join " " vargs)))}
}})
;; now when you log with timbre
(timbre/info "Test" "message")
;; the console output will look like this
20-07-09 12:49:43 host.lan INFO [timbre-mulog.core:24] - Test message
{:mulog/event-name :timbre-mulog.core/log-event,
:mulog/timestamp 1594298983538,
:mulog/trace-id #mulog/flake "4X-LTcCx50c3x2YeJCNK6zMXnRLetnS9",
:mulog/namespace "timbre-mulog.core",
:app-name "myapp",
:env "local",
:level :info,
:message "Test message",
:version "0.2.0"} |
Yeah, I wanted to drop Timbre if I'm to use mulog, to get rid of yet another layer. It is getting a bit off-topic, so just briefly - the mssql JDBC driver uses java.util.logging and I was unable to force it to log on debug level. The j.u.l -> slf4j -> timbre path just somehow didn't work for me. One needs a master's degree in java logging to get these things going :)
I see, so probably the easiest is to clone slf4j-timbre and replace timbre with µ/log.
I will close this issue for now. If you need help with the SLF4J backend feel free to open another issue.
Sure thing. I am preoccupied with other, more burning features right now, but I already chose some lower-priority ones to solve with mulog, so I'll definitely start using it. And once I do, I'll revisit this single-logging-library conundrum.
Hi Bruno, since I wanted to get a little bit more familiar with SLF4J anyway (since it takes a master's degree to log in java) I found this to be a good exercise: https://gitlab.com/nonseldiha/slf4j-mulog

clj -Sdeps '{:deps {com.brunobonacci/mulog {:mvn/version "0.3.1"} nonseldiha/slf4j-mulog {:mvn/version "0.2.0"}}}'
(require '[com.brunobonacci.mulog :as u])
(u/start-publisher! {:type :console})
(def logger (org.slf4j.LoggerFactory/getLogger "TEST"))
(.info logger "hi")
=> {:mulog/trace-id #mulog/flake "4XMgpDmW0Dwjke6MNF2kLya4OAFAxPoh", :mulog/timestamp 1595942042220, :mulog/event-name :TEST/log, :mulog/namespace "org.slf4j.impl.mulog", :log/message "hi", :log/level "info"}

There's a clj namespace to control the log level:

(org.slf4j.impl.mulog/set-level! :debug) ; now logger.debug gets logged too

The default is :info. Please let me know your thoughts, especially on the default info level and the keywords in the final log event (:log/message and :log/level).
Hi @xificurC, that's awesome!!! Regarding the (org.slf4j.impl.mulog/set-level! :debug): it makes sense to have it there too, as some logging could be very noisy, but remember you can always filter it in the publisher:

(u/start-publisher!
{:type :console
;; (f [events]) -> events
 :transform (partial filter (where :log/level :in? [:warning :error :critical]))})

This example uses the where library (require it with (require '[where.core :refer [where]])), but you can write any transformation function. When you have it ready, let me know and I will add a link to it from the µ/log README.
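For completeness, a sketch of the same filter written with a plain predicate, in case you prefer not to add the where dependency:

;; same filter without the where library
(u/start-publisher!
  {:type :console
   :transform (partial filter
                       (fn [event]
                         (contains? #{:warning :error :critical}
                                    (:log/level event))))})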
Hi Bruno,
The rest I understand and will incorporate. Thank you for your feedback!
Yes, you can overwrite it!
good :) I'll probably follow up tomorrow then!
I incorporated the changes in 0.2.1 and will try to whip up the readme tomorrow. Will let you know.
Hi Bruno, I finished the first readme, here's the link again: https://gitlab.com/nonseldiha/slf4j-mulog
Awaiting your review.
Hi @xificurC, that's great. If I can make an observation, I would put the section named "So what should I do?" after you show them how to use it, but your point is spot-on with the whole idea of µ/log. Finally, although SLF4J has the ability to support MDC, clojure.logging doesn't; however, if you are using the µ/log API directly, the local context covers the same need. I'm adding a section in the readme for your project.
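To illustrate the local-context point, a small example with the standard µ/log API (u/with-context); the keys are illustrative:

;; µ/log's local context plays a role similar to SLF4J's MDC:
;; every event logged within the scope carries the extra key/value pairs.
(u/with-context {:request-id "abc-123" :user "peter"}
  (u/log ::order-processed :order-id 42))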
Hi Bruno,
done.
I was thinking of the other way around, actually - if someone's been using MDC or Markers in their Java codebase and they'd like these concepts to propagate to mulog. Maybe that's the minority of the minority though :)
Thank you for your time, feedback and mention.
Hi Bruno,
thanks for this inspiring take on logging and observability in general!
I've read the readme, the internals and a couple of discussions in the issues, as well as searched for SLF4J, before creating this issue.
u/log is a new take on logging, so there is naturally a gap between what the current JVM ecosystem uses and what you created. Are you typically bridging this gap? I don't see an SLF4J adapter anywhere in the repo.
To talk about a more concrete example - the services at my work are configured to log into files, and Filebeat crawls them and publishes the log lines to Logstash. There are a couple of services that actually log JSON lines and simplify the process of giving these lines structure, but the pattern remains. If I am already logging structurally, I can skip the hops and write to Elasticsearch directly through u/log. This is simple to do since everything is ready in this repo. However, what about the other libraries' logs? I'd like to keep them, since e.g. errors from connection pools are important to see. But I'd like to drop the filebeat->logstash->elasticsearch pipeline and simplify the whole thing if I am to use u/log instead of having 2 ways to publish the logs.
So the question is - do you typically bridge these 2 worlds or do you keep them separate? Is there an SLF4J adapter that sends SLF4J logs as u/log events? A simple converter, like the rough sketch below, can be a reasonable start. I've read your rationale from #17, but I'd expect that bridging some of the information over in some way would still be better than nothing. As I noted, though, you might already have a workflow, and an explanation of that workflow might be beneficial to new users, even in the main readme I guess.
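(A minimal, hypothetical sketch of the kind of converter meant here; the event name and keys are illustrative, not an existing API:)

;; republish a traditional log call as a u/log event
(require '[com.brunobonacci.mulog :as u])

(defn legacy-log->mulog
  [logger-name level message throwable]
  (u/log :legacy/log
         :logger      logger-name
         :log/level   level
         :log/message message
         :exception   throwable))

;; e.g. what an SLF4J backend could call under the hood:
;; (legacy-log->mulog "com.zaxxer.hikari.HikariDataSource" :info "Pool started" nil)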
Thanks again and waiting on your input.