
1.11 - import_logs.py broken parser #3805

Closed
anonymous-piwik-user opened this Issue · 8 comments

3 participants

@anonymous-piwik-user

The parser has been broken since 1.11.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combined

1.2.3.4 - - [07/Mar/2013:08:46:03 +0100] "GET /19.pdf HTTP/1.0" 200 4324023 "-" "Apache-HttpClient/4.2.1 (java 1.5)" 276 4324023
5.6.7.8 - - [07/Mar/2013:05:12:44 +0100] "GET /index.htm HTTP/1.1" 206 33106 "http://refer" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.152 Safari/537.22" 437 33106

1.10.1

Direct Entry http://somesite/19.pdf
Website: refer http://somesite/index.htm

1.11

Direct Entry http://somesite/19.pdf%20HTTP/1.0%22%20200%204324023%20%22-%22%20%22Apache-HttpClient/4.2.1%20(java
Direct Entry http://somesite/

@mattab
Owner

In 2debefd: Refs #3805 Adding test and it's working OK so cannot replicate problem...

@mattab
Owner

Please submit a failing test or more info, as it seems to work: 2debefd#L0R14

@anonymous-piwik-user

Can you try importing the attached gentoo.log directly?

It fails for me with the default import_logs.py. When I change

        self.regex = re.compile(regex + '\s*$') # make sure regex includes end of line

back to what was there in 1.10.1:

        self.regex = re.compile(regex)

it works, so I guess the problem is somewhere around there, but I cannot figure out what.

thanks
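To illustrate the comment above, here is a minimal, hypothetical sketch (simplified regexes, not the actual ones from import_logs.py) of why appending '\s*$' breaks matching when a log line carries extra trailing fields, as with the %I %O fields in the reported LogFormat:

```python
import re

# A log line in the reported format: ncsa_extended plus two extra
# trailing fields (bytes in / bytes out from %I %O).
line = ('1.2.3.4 - - [07/Mar/2013:08:46:03 +0100] "GET /19.pdf HTTP/1.0" '
        '200 4324023 "-" "Apache-HttpClient/4.2.1 (java 1.5)" 276 4324023')

# A simplified ncsa_extended-style pattern that stops after the
# user-agent field (hypothetical, for illustration only):
ncsa_extended = (r'(?P<ip>\S+) \S+ \S+ \[(?P<date>.*?)\] '
                 r'"(?P<request>.*?)" (?P<status>\d+) (?P<length>\S+) '
                 r'"(?P<referrer>.*?)" "(?P<user_agent>.*?)"')

# Without the end anchor, the pattern matches even with trailing fields:
assert re.match(ncsa_extended, line) is not None

# With '\s*$' appended (the 1.11 change), the trailing "276 4324023"
# prevents any match, so the format is never recognized:
assert re.match(ncsa_extended + r'\s*$', line) is None
```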

@diosmosis
Collaborator

@matt, I found the issue w/ this bug and uploaded a fix in master...3805, can you review? Here's an explanation of the fix:

It seems the change to the format testing order exposed a couple of bugs. First, the change I committed before isn't flexible enough w/ log files (like gentoo.log, which is ncsa_extended w/ two extra fields). Also, regex quantifiers are greedy, so now that common gets tested before ncsa_extended, 'common' matches w/ incorrect groups and ncsa_extended is never tried.
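The greediness problem can be demonstrated with a hypothetical sketch (simplified regexes, not the real import_logs.py ones): a greedy request group in a 'common'-style pattern swallows the real status, referrer, and user-agent fields, which is consistent with the percent-encoded garbage URLs in the original report.

```python
import re

line = ('1.2.3.4 - - [07/Mar/2013:08:46:03 +0100] "GET /19.pdf HTTP/1.0" '
        '200 4324023 "-" "Apache-HttpClient/4.2.1 (java 1.5)" 276 4324023')

# A 'common'-style pattern with a GREEDY request group (illustrative):
common_greedy = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<date>.*?)\] '
    r'"(?P<request>.*)" (?P<status>\d+) (?P<length>\S+)')

m = common_greedy.match(line)

# The greedy .* runs to the LAST quote that still lets the rest of the
# pattern match, swallowing the real status, referrer and user-agent:
print(m.group('request'))
# GET /19.pdf HTTP/1.0" 200 4324023 "-" "Apache-HttpClient/4.2.1 (java 1.5)

# Status and length are taken from the trailing %I %O fields instead:
print(m.group('status'), m.group('length'))  # 276 4324023
```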

I've reverted my original change (the '\s*$' one) and modified the format autodetection logic to use the format that returns the most groups (i.e., the one that matches and returns the most information). I've also modified the regexes to account for greediness. There are some extra tests, and I fixed a bug w/ the S3 regex.
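The "most groups wins" autodetection rule can be sketched like this (hypothetical names and simplified regexes, not the actual import_logs.py implementation):

```python
import re

# Candidate formats; 'common' is a prefix of 'ncsa_extended', so both
# can match the same line (illustrative patterns only):
FORMATS = {
    'common': re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<date>.*?)\] '
        r'"(?P<request>.*?)" (?P<status>\d+) (?P<length>\S+)'),
    'ncsa_extended': re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<date>.*?)\] '
        r'"(?P<request>.*?)" (?P<status>\d+) (?P<length>\S+) '
        r'"(?P<referrer>.*?)" "(?P<user_agent>.*?)"'),
}

def detect_format(line):
    """Return the name of the matching format with the most captured groups."""
    best_name, best_groups = None, -1
    for name, regex in FORMATS.items():
        match = regex.match(line)
        if match:
            # Count named groups that actually captured something.
            groups = sum(1 for v in match.groupdict().values() if v is not None)
            if groups > best_groups:
                best_name, best_groups = name, groups
    return best_name

line = ('1.2.3.4 - - [07/Mar/2013:08:46:03 +0100] "GET /19.pdf HTTP/1.0" '
        '200 4324023 "-" "Apache-HttpClient/4.2.1 (java 1.5)" 276 4324023')

# Both formats match, but ncsa_extended captures 7 groups vs 5:
print(detect_format(line))  # ncsa_extended
```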

@mattab
Owner

all looks good to me!

@diosmosis
Collaborator

@matt Ok, greediness was only an issue w/ my change, so I've reverted that and added some more tests. I've pushed it again so you can take a look if you want. Will commit tomorrow.

@diosmosis
Collaborator

In 62b43d8: Fixes #3805. Reverted the change in the log importer that looked for end-of-line after the format regex match, and modified the format autodetection logic to pick the format based on whether it matches and on the number of groups returned in the match.

Notes:

  • Added several more tests to the log importer's tests.py: tests for detecting the format of log files w/ extra junk info on log lines, and individual tests for the format-parsing regexes.
  • Modified the log files used in the ImportLogs test, adding extra junk info to the end of some lines.
  • Fixed the failing test in tests.py for the S3 log file format.
This issue was closed.