Audit logging structured entries #31046

Closed
albertzaharovits opened this issue Jun 2, 2018 · 23 comments · Fixed by #31931

@albertzaharovits
Contributor

albertzaharovits commented Jun 2, 2018

To facilitate machine readability, log entries should be structured in a dictionary-like format. Structured logging is also known as semantic or typed logging. Examples of such structured formats are key-value pairs, JSON and XML. In general, flat text entries are more concise, accepting some ambiguity in exchange for human readability. Machine-readable entries are more verbose and explicit, shunning ambiguity.

The status quo for audit logging is a mixture of the templated and the structured entry types. In general, the templated entry type is a good tradeoff between user readability and the dictionary-like format: done right it is unambiguous, and it is also more concise than the structured type. Here is an example entry from the current implementation:

[2018-05-30T07:11:39,605] [transport] [access_granted]  origin_type=[local_node], origin_address=[127.0.0.1], principal=[_xpack_security], roles=[superuser], action=[indices:data/read/search[free_context/scroll]]

Unfortunately, in the current implementation, entries are ambiguous because the set of allowed characters in user and role names includes '=', ' ', '[' and ']'. Fixing this would allow for unambiguous templated log entries.

Unambiguous means machine readable, i.e. you can write a regex to parse a specific field, so why bother with the verbose structured format? The answer is that the next step after parsing is indexing, and indexing requires that event fields have a name and a type. The parser of a templated entry infers the field name and type from the token's position. But there may be several entry types, each requiring different positional parsing, so one has to create several patterns. What if patterns overlap (several can match the same line), or, more commonly, what if parsers assign the same field name to fields with different data types? This is quite possible because the code which generates entries and the code which parses them and assigns type names live in different projects. For that matter, it is also relatively easy for the generating and the parsing pieces to go out of sync with each other, requiring integration tests...
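To make that concrete, here is a minimal, hypothetical Java sketch (not code from any of the projects involved) that parses the example entry above with a regular expression using named groups. Note that the field names and their implied types exist only in the parser, and that the brackets inside the action value already hint at how fragile this is:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplatedAuditEntryParser {

    // Hypothetical pattern for the "access_granted" example above; the field
    // names and their types live entirely in this parser, not in the log line.
    private static final Pattern ACCESS_GRANTED = Pattern.compile(
        "\\[(?<timestamp>[^\\]]+)\\] \\[transport\\] \\[access_granted\\]\\s+"
            + "origin_type=\\[(?<originType>[^\\]]*)\\], "
            + "origin_address=\\[(?<originAddress>[^\\]]*)\\], "
            + "principal=\\[(?<principal>[^\\]]*)\\], "
            + "roles=\\[(?<roles>[^\\]]*)\\], "
            + "action=\\[(?<action>.*)\\]");

    public static void main(String[] args) {
        String line = "[2018-05-30T07:11:39,605] [transport] [access_granted]  "
            + "origin_type=[local_node], origin_address=[127.0.0.1], "
            + "principal=[_xpack_security], roles=[superuser], "
            + "action=[indices:data/read/search[free_context/scroll]]";
        Matcher matcher = ACCESS_GRANTED.matcher(line);
        if (matcher.matches()) {
            // A principal or role containing '=', ' ', '[' or ']' would silently
            // break or mis-parse this pattern, which is the ambiguity noted above.
            System.out.println(matcher.group("principal") + " -> " + matcher.group("action"));
        }
    }
}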

Lecture over, here is the proposal:
The format of the entries should be JSON objects, UTF-8 encoded. This format is dictionary-like and unambiguous: each field has a name, and special characters in values are escaped. The new format will coexist alongside the present one, in a separate file trail.
Field names should align with the Elastic Common Schema (ECS). ECS encourages dot notation for field names, i.e. related fields share a prefix. This proposal will follow that suggestion, but note that entries will not be nested; values are not dictionaries.
The stretch goal is to have the format configurable, so that even the SYSLOG message format from RFC 5424 might be supported. One possible avenue for this is log4j 2 messages.

This proposal is a narrowing of #8786 to the scope of the audit log.
Relates to #29881 .

@elasticmachine
Collaborator

Pinging @elastic/es-security

@clintongormley

When we have discussed structured logging in the past, we had agreed to go down the path of structured logging instead of JSON because it is human readable yet machine parseable, and is more succinct.

I think it'd be a shame to have to use JSON for the audit logs. I understand the issues with escaping, and I don't think we want several regexes to parse different log files. Instead I'm thinking of something like:

CPU high [cpu:90%] on [node:abcdefg][ip:1.2.3.4]

So we can extract the message plus each individual field using the same process for every log line.
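A rough sketch of what that same process could look like, assuming the hypothetical [field:value] convention from the example above (illustration only, not an agreed format): one generic pattern extracts every named field, and whatever is left over is the human-readable message.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BracketedFieldExtractor {

    // One generic pattern for every log line: each field is written as [name:value].
    private static final Pattern FIELD = Pattern.compile("\\[([^:\\]]+):([^\\]]*)\\]");

    public static Map<String, String> extractFields(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher matcher = FIELD.matcher(line);
        while (matcher.find()) {
            fields.put(matcher.group(1), matcher.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        String line = "CPU high [cpu:90%] on [node:abcdefg][ip:1.2.3.4]";
        // Prints {cpu=90%, node=abcdefg, ip=1.2.3.4}; the free text remains the message.
        System.out.println(extractFields(line));
    }
}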

@rjernst You used structured logging at a previous job - you must have faced similar problems. Any suggestions for this?

@albertzaharovits
Contributor Author

@clintongormley in the context of my pretentious lingo about petty matters, JSON is an example of structured logging.
What I think you mean is that, in the past, templated logging has been preferred over structured because of its conciseness and human readability.

Basically, the difference between templated and structured logging is whether values are accompanied by their field name.
The example you propose is a novel breed of templated logging where values carry field names. I haven't seen this before.
What I've seen is that entries have a "message" field that is free text to soothe the human soul.
So, using your example, it would look like this:

{ "message": "CPU is high at 90% on node abcdefg with ip 1.2.3.4",
  "cpu": "90%",
  "node": "abcdefg",
  "ip": "1.2.3.4"
}

The JSON will not be nested, so a one line entry (unformatted with spaces) should still be readable by the typical IT admin eye.

I am leaning towards the flat JSON with the text "message" field because it is the canonical way (parsers don't require regexps), although it loses on conciseness. WDYT?

@pcsanwald
Contributor

pcsanwald commented Jun 26, 2018

My personal experience here is that at my last job I used winston with the default JSON formatting, feeding into Papertrail. I found the lack of conciseness extremely annoying, and as a business we never derived any particular value from the JSON structure. That is obviously anecdotal. I set it up that way because it seemed like a good idea at the time, but I would think twice before doing it again.

Also, since it's not trivial for things like Postgres to log in JSON format, anyone using an alternate or additional log aggregation service is likely going to be dealing with templated logs regardless. And in the case of audit logging, there will very often be a third-party aggregation service involved somehow (certainly this was the case when I worked at a large investment bank).

I'm a bit curious why we can't support both formats via config?

@clintongormley

It's still not fantastically human readable, and we do want to optimise for readability so that sysadmins don't hate us at 3am. The most important thing for me is that we use the same style of logging across all logs generated by Elasticsearch. Think about what a stack trace would look like in JSON. Awful!

/cc @elastic/es-core-infra for opinions.

I'm a bit curious why we can't support both formats via config?

Of course this is a possibility, but it'd be much nicer to be able to satisfy both needs out of the box, so that centralised logging doesn't require config changes that then make local access unreadable.

@rjernst
Member

rjernst commented Jun 26, 2018

@rjernst You used structured logging at a previous job - you must have faced similar problems. Any suggestions for this?

I'm torn on which is more readable, JSON or text. My previous use of structured logging was for search request timings (among other things). It was a multi-line format with an end-of-record marker and became hard to read because it was packed together so tightly, with no extra spacing. The format would roughly look something like:

DATE
path=...
ip=...
timers=timer1:57,timer2:230
...
EOR

So there was a general set of "timers" (which could change in code based on the request/parts of the code executed), and the times were always in milliseconds. I believe there were some other generalized keys with subkeys, but the timers are, I think, the most illustrative of my point: having the types of values based on structure (e.g. all subkeys of "timers") makes the server side a lot more flexible.

Obviously this could be in JSON as well; it would just be more verbose. In some cases (large entries), the JSON is probably easier to read (if pretty printing is used). I would just make sure in that case that the superfluous formatting (i.e. spaces/newlines) is removed before transferring the records.

@rjernst
Member

rjernst commented Jun 26, 2018

I should also mention my use case was a dedicated log for search requests, while I think part of our dilemma is that we have multiple logs for different purposes that we are trying to give a unified format. IMO logs with general messages should look different from request logs, etc.

@jasontedor
Member

I am concerned by the fact that this discussion is being driven on a targeted issue (structured logging for audit logging) rather than more broadly within Elasticsearch. Is there a reason that we are approaching this from this angle?

@albertzaharovits
Contributor Author

albertzaharovits commented Jun 27, 2018

Is there a reason that we are approaching this from this angle?

I have tried to scope my work to achieve the removal of audit indexing #29881. To that end, only the logfile audit has to be machine readable so that it is easy to parse (without utilizing multiple regexps). The topic did touch on matters that are not audit-logfile specific, such as stack traces.

IMO log formats can differ: request, slowlog and audit logs can each have their own format, although I agree it would be desirable for all logs to be either templated or JSON-like. In light of this, and given that the current implementation is mostly on the templated bandwagon for all the logs (with nothing JSON-like), I am now swayed towards the templated format that @clintongormley proposed. It has names for fields, so a single regexp suffices to parse all entries and there is no possibility for multiple patterns to overlap. It is also readable (entries are phrases rather than tabular) and resembles the general log.

@clintongormley

Is there a reason that we are approaching this from this angle?

I think the only reason is that the audit log is the first one to get an overhaul, so it serves as a test case.

I am now swayed to the templated proposal

@albertzaharovits I'm not saying we MUST go down this route, but I would appreciate at least some investigation to see if it is feasible. We still have the escaping issue to sort out, but that may be as simple as escaping : and ] to \: and \].

If this ends up proving too difficult or unusable, then I'm OK going back to the JSON route.

@albertzaharovits
Contributor Author

Given the "Filebeat module for Elasticsearch" effort elastic/beats#5301, and specific to the audit logfile in elastic/beats#7365 , I think @ruflin and @radoondas might help weighing in trade offs from an ingest perspective.

@ruflin
Member

ruflin commented Jul 3, 2018

From an ingest perspective I would hope that any new format does not require us to use grok patterns but that we can use dissect instead: https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html I'm linking to the Logstash docs as they are more extensive, but the dissect implementations on the Beats and Logstash sides are identical.

JSON is nice from a machine perspective but also has its downsides, which we see now in Kibana: we don't know in advance what fields will show up in the log line, as any plugin can add its own fields. For the unknown parts of a log line, having it all in one field and letting Elasticsearch tokenize it is very powerful.

@albertzaharovits
Contributor Author

albertzaharovits commented Jul 3, 2018

I need to work on a POC, but I think we can meet all the demands, i.e. unambiguous and parsable by both man and machine. I think we can use log4j 2's PatternLayout and JSONLayout, while ES generates StringMapMessages.
The idea is that audit logs will continue to be 'templated' (but unambiguous) and will be generated by the PatternLayout. Unfortunately they will have to be parsed with a grok pattern; Logstash's dissect requires a tabular format, which I don't think is suitable here: entries vary wildly and the header would be too wide, i.e. unreadable IMO.
If structured entries are required, there will be a commented-out section in the log4j conf file which enables (at runtime) a new appender with a JSONLayout. That output will be machine-parsable without any pattern matching.
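For reference, a minimal sketch of the idea (the logger name and fields below are illustrative, not the ones the PR settles on): the code only builds a log4j 2 StringMapMessage, and whether it comes out as a templated line or as JSON is decided entirely by the layout configured on the appender.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class AuditTrailSketch {

    // Hypothetical logger name; the real audit trail logger is configured in log4j2.properties.
    private static final Logger AUDIT = LogManager.getLogger("audit.sketch");

    public static void main(String[] args) {
        // The event is just a map of named fields; no formatting decision is made here.
        StringMapMessage event = new StringMapMessage()
            .with("event.type", "transport")
            .with("event.action", "access_granted")
            .with("user.name", "_username")
            .with("origin.address", "127.0.0.1")
            .with("action", "indices:data/read/search");
        // A PatternLayout appender renders this as a templated line,
        // while a JSONLayout appender renders the same message as a JSON object.
        AUDIT.info(event);
    }
}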

working on a POC, will come back with details!

@ruflin
Member

ruflin commented Jul 4, 2018

Filebeat will have to support whatever the default is. So even if we have a JSON option which is off by default, Filebeat will still have to implement and maintain the grok patterns.

One thing we started experimenting with is mixing grok and dissect: doing all the basic work with dissect and only having simple grok patterns for the parts which can't be covered by dissect.

Looking forward to the POC.

@albertzaharovits
Contributor Author

albertzaharovits commented Aug 9, 2018

The structured audit log events PR #31931 is plodding forward and I can now run most tests. That PR constructs audit event entries as log4j StringMapMessages instead of plain Java Strings. This aims for maximum flexibility in terms of the actual final format of the event, which is delegated to log4j layouts.

After some internal mulling over the most clear and extensible log event format, I wish to gather some external input on this matter.

In a few words, I think JSON structured entries are the most concise and readable format for the audit log, even if it were read exclusively by humans. In addition, the real value comes from easy machine ingestion. This is in contrast to the other proposed format of 'templated' entries, i.e. entries with values interspersed with text.
What follows is my reasoning.

Foreword:
This is all a matter of taste and educated opinions; there is no good or bad answer. We just have to agree on a default log line format, which the administrator can change by touching only the log4j config files and not the code. Ideally, opinions should be weighted by the respective operational experience, yet all input is valuable!! (i.e. the weight coefficient is very small 😁)

I think it's best if I start with some examples of the current entry format followed by the proposed entry format. Also, it's worth keeping in mind that the audit log trail is a separate file from all the other logs.

Current audit events (with randomization):

[transport] [access_granted]       origin_type=[rest], origin_address=[127.0.0.1], principal=[_username], realm=[authRealm], roles=[fVrd], action=[_action], request=[MockMessage], opaque_id=[xUXGgzAcIa]
[transport] [anonymous_access_denied]      origin_type=[rest], origin_address=[127.0.0.1], action=[_action], request=[MockMessage], opaque_id=[xUXGgzAcIa]
[transport] [authentication_failed]        origin_type=[rest], origin_address=[127.0.0.1], principal= _principal], action=[_action], request=[MockMessage], opaque_id=[xUXGgzAcIa]
[] [rest] [realm_authentication_failed]    realm=[_realm], origin_address=[::1], principal=[_principal], uri=[_uri], opaque_id=[NiPseBvEEw], request_body=[]
[0.0.0.0:8] [] [rest] [anonymous_access_denied]    origin_address=[127.0.0.1], uri=[_uri], opaque_id=[moFIRdIcWA], request_body=[]
[transport] [anonymous_access_denied]      origin_type=[local_node], origin_address=[0.0.0.0:9], action=[_action], request=[MockMessage]
[0.0.0.0:11] [transport] [access_granted]  origin_type=[rest], origin_address=[127.0.0.1], principal=[_system], realm=[authRealm], roles=[uw], action=[internal:_action], indices=[idx1,idx2], request=[MockIndicesRequest], opaque_id=[liGwWHzFtE]
[transport] [authentication_failed]        origin_type=[rest], origin_address=[127.0.0.1], action=[_action], request=[MockIndicesRequest], opaque_id=[xUXGgzAcIa]
[transport] [access_denied]        origin_type=[rest], origin_address=[127.0.0.1], principal=[_username], realm=[authRealm], roles=[fVrd], action=[_action], request=[MockIndicesRequest], opaque_id=[xUXGgzAcIa]
[0.0.0.0:14] [rest] [authentication_success]       principal=[running as], run_by_principal=[_username], realm=[_realm], uri=[_uri], params=[{foo=bar}], request_body=[{ "key": "value" ]

Proposed audit events:

{action="_action/bar", event.action="access_denied", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:1", opaque_id="Up", origin.address="127.0.0.1", origin.type="rest" request.name="MockMessage", user.name="running_as", user.realm="lookRealm", user.roles="xusDS", user.run_by.name="_username", user.run_by.realm="authRealm"}
{event.action="connection_granted", event.category="elasticsearch-audit", event.type="ip_filter", opaque_id="Zpet", origin.address="127.0.0.1", origin.type="ip_filter", rule="allow default:accept_all", transport.profile="lwVSs"}
{action="internal:_action", event.action="access_granted", event.category="elasticsearch-audit", event.type="transport", opaque_id="dHiE", origin.address="0.0.0.0:3", origin.type="local_node", request.name="MockMessage", user.name="running_as", user.realm="lookRealm", user.roles="IShQaX", user.run_by.name="_username", user.run_by.realm="authRealm"}
{action="_action", event.action="access_granted", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:7", indices="idx1,idx2", origin.address="127.0.0.1", origin.type="rest", request.name="MockIndicesRequest", user.name="_username", user.realm="authRealm", user.roles="araSL"}
{action="_action", event.action="authentication_failed", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:9", origin.address="127.0.0.1", origin.type="rest", request.name="MockMessage", user.name="_principal"}
{action="_action", event.action="access_granted", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:9", origin.address="127.0.0.1", origin.type="rest", request.name="MockMessage", user.name="running_as", user.realm="lookRealm", user.roles="EJ", user.run_by.name="_username", user.run_by.realm="authRealm"}
{action="_action", event.action="access_denied", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:9", origin.address="127.0.0.1", origin.type="rest", request.name="MockMessage", user.name="running_as", user.realm="lookRealm", user.roles="EJ", user.run_by.name="_username", user.run_by.realm="authRealm"}
{action="_action", event.action="run_as_granted", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:9", origin.address="127.0.0.1", origin.type="rest", request.name="MockMessage", user.name="_username", user.realm="authRealm", user.roles="EJ", user.run_as.name="_username"}
{action="_action", event.action="run_as_denied", event.category="elasticsearch-audit", event.type="transport", host.ip="0.0.0.0:9", origin.address="127.0.0.1", origin.type="rest", request.name="MockMessage", user.name="_username", user.realm="authRealm", user.roles="EJ", user.run_as.name="running_as", user.run_as.realm="lookRealm"}

Some things to note about the proposed alternative:

  • all fields are named
  • field names are (or should be) ECS-compatible
  • any field with an empty value is not printed
  • entries vary wildly in the number and types of fields
  • the origin.* fields, although redundant at times, are ECS-compatible and add consistency

My argument rests on the opinion that templated entries (which have text interspersed between the values) do not add value and, on the contrary, might hurt the eye of the human administrator. For example, for the run_as_denied event, trying to put event.action="run_as_denied", event.type="transport", origin.address="127.0.0.1", origin.type="rest", user.name="_username", user.realm="authRealm", user.roles="EJ", user.run_as.name="running_as", user.run_as.realm="lookRealm" into a phrase along the lines of:

username _username from realm authRealm with roles EJ has been denied run as for the user running_as from realm lookRealm from a request over the transport channel with the remote endpoint having the address 127.0.0.1.

It is not really an improvement. Maybe for a first-time user who had not heard about run as before, it might be. But this rookie user is not someone we should optimize for. This log is very verbose and rarely consulted by the naked eye. After seeing the phrase a few times, the eye instinctively jumps between the templated values, ignoring the filler words in between. Ignoring words is safe, but it can lead to jumbling entries in one's head and to confusion, since such phrases are heterogeneous and yet might look alike. I think the human would rather be able to use jq (a JSON equivalent of sed) than strain their eyes over phrases, even if, upon seeing an event type for the first time, they spend 10 seconds more understanding it.

I might agree that we could add another field to the proposed JSON format containing a proper phrase. But again, I don't see the value; maybe I am too deep into this stuff and it's just my narrow view of the subject.

I want to conclude by underscoring that this trail should be in a separate file, with different rotation and ingestion preferences. It does not contain stack traces or suggestions that humans (users or administrators) should follow.

@ruflin
Member

ruflin commented Aug 9, 2018

There is another argument for using JSON that I recently stumbled over. ES is more and more often run inside Docker. A Docker container normally only has two outputs: stdout and stderr. In most cases logs are consumed through these outputs (yes, there are alternative options). This causes a problem for most services which have more than 1-2 different logs. Assuming in Docker all logs are sent out through stdout, JSON makes it really simple for log collectors like Filebeat to differentiate the events, for example based on an additional field like type: audit, and to apply the appropriate processors. For example: if type == "audit": remove fields x,yz.

I think there is also an opportunity on the UI side to make JSON logs more human-readable, to get the best of both worlds.

@clintongormley

Just had a chat with @albertzaharovits and I agree that the templated approach is a poor cousin to just using JSON in the case of the audit log. Ideally, I'd like to keep some human readability by having e.g. the timestamp and severity as the first two positional arguments, followed by the JSON string.

@ruflin how feasible would it be to parse a format like this in Beats?

[2018-08-08T19:20:01,251][INFO ][{"action":"_action/bar", "event.action":"access_denied", "event.category":"elasticsearch-audit", "event.type":"transport", "host.ip":"0.0.0.0:1", "opaque_id":"Up", "origin.address":"127.0.0.1", "origin.type":"rest" "request.name":"MockMessage", "user.name":"running_as", "user.realm":"lookRealm", "user.roles":"xusDS", "user.run_by.name":"_username", "user.run_by.realm":"authRealm"}]

@ruflin
Member

ruflin commented Aug 9, 2018

This should definitely be doable. I haven't tested it, but I think the processors would look as follows on the Beats side:

processors:
  - dissect:
      tokenizer: "[%{@timestamp}][%{log.level}][%{json.data}]"
      field: "message"
      target_prefix: ""
  - decode_json_fields:
      fields: ["json.data"]
      target: "elasticsearch.audit"

Note: are the [] brackets for the last part needed? I was just thinking that things could become more complicated if there is a ] inside the JSON somewhere.

@tvernum
Contributor

tvernum commented Aug 10, 2018

If we take the JSON path, I think it would be helpful (if possible) to make sure the keys are ordered in a logical way.
To my thinking that would be things like event.action and user.name being early in the message and things like host.ip and event.category being at the end.
I'm sure there's a bunch of different arguments that could be made, and I don't want to bike-shed it, but trying to make it readable from left-to-right would be very helpful and random order would be absolutely terrible.

@albertzaharovits
Contributor Author

albertzaharovits commented Aug 15, 2018

Here is a generous assortment of log lines, as they are generated in the #31931 PR:

{"timestamp":"2018-08-15T20:53:50,372", "event.type":"transport", "event.action":"authentication_failed", "origin.type":"rest", "origin.address":"127.0.0.1", "action":"_action", "request.name":"MockIndicesRequest", "indices":"idx1,idx2", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,440", "host.ip":"0.0.0.0:3", "event.type":"transport", "event.action":"access_denied", "user.name":"_username", "user.realm":"authRealm", "user.roles":"pqNltU", "origin.type":"rest", "origin.address":"127.0.0.1", "action":"_action/bar", "request.name":"MockIndicesRequest", "indices":"idx1,idx2", "opaque_id":"Ce", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,485", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,490", "event.type":"transport", "event.action":"authentication_failed", "user.name":"_principal", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,491", "event.type":"transport", "event.action":"authentication_failed", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,492", "event.type":"transport", "event.action":"realm_authentication_failed", "user.name":"_principal", "origin.type":"transport", "origin.address":"0.0.0.0", "realm":"IlHf", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,776", "event.type":"transport", "event.action":"access_granted", "user.name":"_username", "user.realm":"authRealm", "user.roles":"xo", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,777", "event.type":"transport", "event.action":"access_denied", "user.name":"_username", "user.realm":"authRealm", "user.roles":"xo", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,778", "event.type":"transport", "event.action":"tampered_request", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,778", "event.type":"transport", "event.action":"tampered_request", "user.name":"_username", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,779", "event.type":"transport", "event.action":"run_as_granted", "user.name":"_username", "user.run_as.name":"_username", "user.realm":"authRealm", "user.roles":"xo", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,780", "event.type":"transport", "event.action":"run_as_denied", "user.name":"_username", "user.run_as.name":"running_as", "user.realm":"authRealm", "user.run_as.realm":"lookRealm", "user.roles":"xo", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,781", "event.type":"transport", "event.action":"authentication_success", "user.name":"_username", "origin.type":"transport", "origin.address":"0.0.0.0", "realm":"IlHf", "action":"_action", "request.name":"MockMessage", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,781", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,785", "event.type":"transport", "event.action":"tampered_request", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,785", "event.type":"transport", "event.action":"tampered_request", "user.name":"_username", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,791", "event.type":"transport", "event.action":"tampered_request", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,792", "event.type":"transport", "event.action":"tampered_request", "user.name":"_username", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,792", "event.type":"transport", "event.action":"run_as_granted", "user.name":"_username", "user.run_as.name":"_username", "user.realm":"authRealm", "user.roles":"xo", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,793", "event.type":"transport", "event.action":"run_as_denied", "user.name":"_username", "user.run_as.name":"running_as", "user.realm":"authRealm", "user.run_as.realm":"lookRealm", "user.roles":"xo", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,793", "event.type":"transport", "event.action":"authentication_success", "user.name":"_username", "origin.type":"local_node", "origin.address":"0.0.0.0:6", "realm":"IlHf", "action":"_action", "request.name":"MockIndicesRequest", "opaque_id":"oh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,817", "event.type":"rest", "event.action":"authentication_failed", "user.name":"_principal", "origin.type":"rest", "origin.address":"::1", "url.path":"_uri", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,849", "event.type":"transport", "event.action":"access_granted", "user.name":"_username", "user.realm":"authRealm", "user.roles":"hBRjWv", "origin.type":"transport", "origin.address":"127.0.0.1", "action":"internal:_action", "request.name":"MockMessage", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,865", "event.type":"rest", "event.action":"tampered_request", "origin.type":"rest", "origin.address":"::1", "url.path":"_uri", "url.query":"_param=baz", "opaque_id":"eHXy", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,902", "event.type":"transport", "event.action":"realm_authentication_failed", "user.name":"_principal", "origin.type":"rest", "origin.address":"127.0.0.1", "realm":"y", "action":"_action", "request.name":"MockMessage", "opaque_id":"jmzI", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,960", "event.type":"transport", "event.action":"access_granted", "user.name":"_system", "user.realm":"_reserved", "user.roles":"ZbLX", "origin.type":"rest", "origin.address":"127.0.0.1", "action":"internal:_action", "request.name":"MockIndicesRequest", "indices":"idx1,idx2", "opaque_id":"rV", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:50,973", "event.type":"transport", "event.action":"anonymous_access_denied", "origin.type":"rest", "origin.address":"127.0.0.1", "action":"_action", "request.name":"MockMessage", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,019", "host.ip":"0.0.0.0:15", "event.type":"transport", "event.action":"run_as_granted", "user.name":"_username", "user.run_as.name":"running as", "user.realm":"authRealm", "user.run_as.realm":"lookRealm", "user.roles":"L", "origin.type":"local_node", "origin.address":"0.0.0.0:15", "action":"_action", "request.name":"MockMessage", "opaque_id":"woq", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,046", "event.type":"ip_filter", "event.action":"connection_granted", "origin.type":"ip_filter", "origin.address":"127.0.0.1", "transport.profile":"IEJu", "rule":"allow default:accept_all", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,058", "event.type":"transport", "event.action":"authentication_success", "user.name":"_username", "origin.type":"transport", "origin.address":"127.0.0.1", "realm":"GZkVN", "action":"_action", "request.name":"MockMessage", "opaque_id":"r", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,076", "event.type":"rest", "event.action":"realm_authentication_failed", "user.name":"_principal", "origin.type":"rest", "origin.address":"127.0.0.1", "realm":"usmY", "url.path":"_uri", "url.query":"_param=baz", "opaque_id":"IQgi", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,087", "event.type":"transport", "event.action":"access_granted", "user.name":"running_as", "user.run_by.name":"_username", "user.realm":"lookRealm", "user.run_by.realm":"authRealm", "user.roles":"FHHq", "origin.type":"transport", "origin.address":"0.0.0.0", "action":"_action", "request.name":"MockIndicesRequest", "indices":"idx1,idx2", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,102", "host.ip":"0.0.0.0:21", "event.type":"ip_filter", "event.action":"connection_denied", "origin.type":"ip_filter", "origin.address":"127.0.0.1", "transport.profile":"wcvzs", "rule":"deny _all", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,120", "host.ip":"0.0.0.0:22", "event.type":"rest", "event.action":"authentication_success", "user.name":"_username", "origin.type":"rest", "origin.address":"127.0.0.1", "realm":"OX", "url.path":"_uri", "url.query":"foo=bar&evac=true", "request.body":"{ \"key\": \"value\" ", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,159", "host.ip":"0.0.0.0:23", "event.type":"rest", "event.action":"authentication_failed", "origin.type":"rest", "origin.address":"127.0.0.1", "url.path":"_uri", "url.query":"bar=baz", "opaque_id":"JQcU", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,185", "host.ip":"0.0.0.0:24", "event.type":"transport", "event.action":"authentication_failed", "user.name":"_principal", "origin.type":"transport", "origin.address":"127.0.0.1", "action":"_action", "request.name":"MockMessage", "opaque_id":"gAd", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,200", "event.type":"transport", "event.action":"tampered_request", "origin.type":"transport", "origin.address":"127.0.0.1", "action":"_action", "request.name":"MockMessage", "opaque_id":"Bh", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,214", "event.type":"transport", "event.action":"run_as_denied", "user.name":"_username", "user.run_as.name":"running as", "user.realm":"authRealm", "user.run_as.realm":"lookRealm", "user.roles":"dUbU", "origin.type":"rest", "origin.address":"127.0.0.1", "action":"_action", "request.name":"MockIndicesRequest", "indices":"idx1,idx2", "event.category":"elasticsearch-audit"}
{"timestamp":"2018-08-15T20:53:51,227", "event.type":"rest", "event.action":"anonymous_access_denied", "origin.type":"rest", "origin.address":"127.0.0.1", "url.path":"_uri", "request.body":"{ \"key\": \"value\" ", "event.category":"elasticsearch-audit"}

These are the complete formatted entries; there is no other preamble or suffix.
A few things to note:

  • entries are structured; all event fields are of the "key":"value" form; there is no human-friendly phrase message which binds all the values together.
  • entries are formatted as valid, flat (no nested objects) JSON, each on a single line
  • event fields are arranged in a specified order via the log4j2.properties config file. This is achieved by formatting the event using the PatternLayout instead of the JSONLayout. The field order, as well as the layout altogether, may be changed at runtime in the log4j conf file.
  • fields without values (null or empty) are not printed, to make good use of the structured format. If we were to print empty or missing values, we might as well have reorganized the log as a wide table, with field names as table headers and positional values, with placeholders for missing values, i.e. the usual tabular log format. However, entries here vary a lot in their set of fields, which would result in a tabular format that is too wide and populated with sparse entries. Therefore structured entries are preferred.
  • request.body is printed as an ordinary JSON-escaped string.

Besides the low-level questions, such as field names, order and data types, which are tracked in the #31931 PR, there is an important question still to be discussed:
Should this new format replace the existing format? Or should it be printed to a new file alongside the existing one? Is completely changing the format too much for a minor version upgrade? I know adding new fields to log entries between minors is alright, but this is at a different level. Should we view this log as any other and therefore have no bwc binding?

@tvernum
Contributor

tvernum commented Aug 17, 2018

I think we should be reluctant to make this sort of change to the logfile audit output in a minor release. I don't think it's a flat-out "no way", but we have other options and we should use them.

I would lean towards:

  • a new output type ("file", or "json-file", or something)
  • name it "{cluster}_audit.log" rather than _access.log (so users can technically have both while they transition)
  • deprecate the "logfile" output with a view to removing it in either 7.0 or 8.0 depending on how generous we feel.

@albertzaharovits
Contributor Author

albertzaharovits commented Aug 20, 2018

I tend to agree with "we have other options and we should use them".

But I don't like having another output type, because I think we can get away without writing any code for this.
Instead, I propose we ship with two file appenders in the log4j2.properties file:

  1. the existing layout written to the _access.log rolling file
  2. the new JSON layout written to the _audit.log rolling file (new file)

Enabling both is probably suboptimal because of wasted resources; ultimately nobody would use both. I think we should enable the new appender only, with the previous one commented out. This way we are more explicit that something changed (the file is not there, as opposed to the content format having changed), and if this comes as a surprise (because of not reading the release change notes) users can switch the previous behavior back on at runtime by changing the log4j file. If we default to enabling the existing format and commenting out the new one, we implicitly delay its adoption (until we decide we don't want our stack to parse the existing format, since Beats has to support the default), as I don't think users will want to tinker with log4j files if everything "works".
Enabling both, although it wastes resources, has zero impact on the users consuming the file audit trail and allows Beats to read structured entries soonest.

In a way, I propose to force the new format onto our users, and be very obvious about it, while at the same time making it easy for them to keep using the deprecated format.
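To make the two-appender idea concrete, here is a rough log4j2.properties sketch; the appender and logger names, file paths and layout pattern are illustrative assumptions, not the configuration that actually ships (that is decided in the #31931 PR):

# Sketch only: names, paths and the pattern below are illustrative.
appender.audit_json.type = RollingFile
appender.audit_json.name = audit_json
appender.audit_json.fileName = ${sys:es.logs.base_path}/${sys:es.logs.cluster_name}_audit.log
appender.audit_json.filePattern = ${sys:es.logs.base_path}/${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}.log
appender.audit_json.layout.type = PatternLayout
appender.audit_json.layout.pattern = %m%n
appender.audit_json.policies.type = Policies
appender.audit_json.policies.time.type = TimeBasedTriggeringPolicy
appender.audit_json.policies.time.interval = 1

# The existing templated appender would remain here, commented out, still
# pointing at ${sys:es.logs.cluster_name}_access.log for users who want it back.

logger.audit.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.audit.level = info
logger.audit.appenderRef.audit_json.ref = audit_json
logger.audit.additivity = false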

@jaymode
Member

jaymode commented Aug 20, 2018

Enabling both, although it wastes resources, has zero impact on the users consuming the file audit trail and allows Beats to read structured entries soonest.

I suggest we ship with both outputs enabled in 6.x and remove the old output in 7.0. I think there is even a possibility that we can programmatically get the appenders for that logger, see if the old format is enabled, and log a deprecation message in 6.x. Existing consumers will keep working and the Beats consumer will work as well, with the downside of more disk usage and I/O.
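A minimal sketch of what such a check could look like against the log4j 2 core API (the logger and appender names are assumptions for illustration; they would have to match whatever ships in log4j2.properties):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class DeprecatedAuditAppenderCheck {

    // Hypothetical names; the real logger/appender names live in log4j2.properties.
    private static final String AUDIT_LOGGER_NAME =
        "org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail";
    private static final String OLD_APPENDER_NAME = "access_rolling";

    public static boolean oldFormatEnabled() {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();
        LoggerConfig loggerConfig = configuration.getLoggerConfig(AUDIT_LOGGER_NAME);
        // If the old appender is still attached to the audit logger, the deprecated
        // templated output is active and a deprecation warning could be emitted.
        return loggerConfig.getAppenders().containsKey(OLD_APPENDER_NAME);
    }
}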
