
Web interface showing "0 messages found" when specifying absolute time stamp #1266

Closed · linuxprofessor opened this issue May 14, 2015 · 10 comments
Labels: bug
Milestone: 1.1.0

@linuxprofessor commented May 14, 2015

When I specify an absolute timestamp in the web interface and search, I always get "0 messages found".

Elasticsearch query before absolute time selection:

{
    "from": 0,
    "size": 100,
    "query": {
        "match_all": {}
    },
    "post_filter": {
        "bool": {
            "must": {
                "range": {
                    "timestamp": {
                        "from": "2015-05-14 11:50:33.145",
                        "to": "2015-05-14 19:50:33.145",
                        "include_lower": true,
                        "include_upper": true
                    }
                }
            }
        }
    },
    "sort": [
        {
            "timestamp": {
                "order": "desc"
            }
        }
    ],
    "highlight": {
        "require_field_match": false,
        "fields": {
            "*": {
                "fragment_size": 0,
                "number_of_fragments": 0
            }
        }
    }
}

For some reason it seems to be "stuck" at UTC: the local time here (and the configured time zone) is CEST, i.e. two hours ahead.

Elasticsearch query after time selection:

{
    "from": 0,
    "size": 100,
    "query": {
        "match_all": {}
    },
    "post_filter": {
        "bool": {
            "must": {
                "range": {
                    "timestamp": {
                        "from": "2015-05-03 04:10:17.000",
                        "to": "2015-05-11 03:57:35.000",
                        "include_lower": true,
                        "include_upper": true
                    }
                }
            }
        }
    },
    "sort": [
        {
            "timestamp": {
                "order": "desc"
            }
        }
    ],
    "highlight": {
        "require_field_match": false,
        "fields": {
            "*": {
                "fragment_size": 0,
                "number_of_fragments": 0
            }
        }
    }
}
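To check whether the messages themselves are searchable, the same absolute query can be run directly against Elasticsearch with the bounds shifted back two hours (CEST -> UTC). A minimal sketch, assuming Elasticsearch on localhost:9200 and the default graylog2_* index prefix:

curl -XPOST 'http://localhost:9200/graylog2_*/_search?pretty' -d '{
    "size": 1,
    "query": { "match_all": {} },
    "post_filter": {
        "bool": {
            "must": {
                "range": {
                    "timestamp": {
                        "from": "2015-05-03 02:10:17.000",
                        "to": "2015-05-11 01:57:35.000",
                        "include_lower": true,
                        "include_upper": true
                    }
                }
            }
        }
    }
}'

If this returns hits while the web interface shows "0 messages found", the stored data is fine and the problem is in the time zone handling or the index range metadata.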

Graylog versions:

  • Graylog server: 1.0.2 (e5432f1) (Jever)
  • Graylog web interface and Java: v1.0.2 (e5432f1) (Oracle Corporation 1.7.0_79 / Linux 2.6.32-504.8.1.el6.x86_64)

Time zone info:


User admin:
    2015-05-14 21:47:51.411 +02:00
Web browser:
    2015-05-14 21:47:52.355 +02:00
Default JDK/JRE:
    2015-05-14 21:47:51.411 +02:00
Graylog web interface:
    2015-05-14 21:47:51.411 +02:00
Graylog master server:
    2015-05-14 21:47:51.411 +02:00 
@joschi added the bug label May 15, 2015
@joschi added this to the 1.1.0 milestone May 15, 2015
@joschi (Contributor) commented May 15, 2015

Might be related to Graylog2/graylog2-server#779 and Graylog2/graylog2-server#1132.

@linuxprofessor Could you please recalculate the index time ranges (System -> Indices -> Maintenance -> Recalculate index ranges) and check whether the query still returns no results?
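The same recalculation can also be triggered via the REST API; a sketch, assuming the default REST listen port 12900 and admin credentials:

# Rebuild/sync the index range information for all indices
curl -XPOST -u admin:yourpassword 'http://localhost:12900/system/indices/ranges/rebuild'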

@linuxprofessor (Author) commented May 15, 2015

I've tried that; it didn't make a difference at the time.

After some more trial and error, I found that restarting the browser and clearing all caches/cookies after rebuilding solved the issue. I don't know whether this really qualifies as a bug, but users should probably be made aware of it in case someone else runs into the same issue and rebuilding the index doesn't do the trick right away.

@huksley commented May 21, 2015

Having the same issue. Rebuilding helps, but not for long. We rotate to a new index every hour, i.e. rotation_strategy = time and elasticsearch_max_time_per_index = 1h (see the excerpt below).
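For context, the relevant part of our server configuration (assuming the usual server.conf; other settings omitted):

# Rotate to a new index every hour
rotation_strategy = time
elasticsearch_max_time_per_index = 1h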

@kroepke (Contributor) commented Jun 1, 2015

Did this appear prior to 1.1?
I still haven't been able to reproduce it, and the relevant code has not been touched in quite a while.

@huksley commented Jun 3, 2015

Yes, it happens in 1.0.1 as well. We see it quite frequently with hourly indices and a lot of messages per index.

@kroepke (Contributor) commented Jun 3, 2015

@huksley Ok, thanks.
Does it only happen when the indices rotate, or also in between?

If it's the former, you might want to set disable_index_range_calculation = true in your server config file (see the sketch below).
This will be the default starting with 1.1.0, and it drastically reduces the time needed to build search metadata after index rotation.
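A sketch of the change, assuming the default server.conf location:

# /etc/graylog/server/server.conf
# Skip the full index range recalculation after each index rotation
disable_index_range_calculation = true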

@drewmmiranda commented Aug 12, 2015

So I'm actively having this issue when the following takes place:

  • Retention rolls over to a new index
  • The oldest index is deleted

An absolute time range search comes back empty, BUT manually recalculating the index ranges fixes it until the bug occurs again.

IF a new index is created but the oldest index is not deleted by the retention rules, this bug does not occur (a retention excerpt is sketched below).
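For reproduction, this is the kind of delete-based retention setup the bug shows up under; a hypothetical server.conf excerpt (the index count is made up, adjust to your deployment):

# Delete the oldest index once the maximum number of indices is reached
retention_strategy = delete
elasticsearch_max_number_of_indices = 20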

Is this what is being addressed in "Fix several index retention problems" #1208? I ask because all related issues appear to be closed, yet this one is not resolved.

Edit: it appears #779 is still open in reference to this.

@bernd (Contributor) commented Aug 13, 2015

@drewmmiranda Most of these issues are fixed in the upcoming 1.2 release; that's why they are closed.

@rrtj3 commented Oct 28, 2015

I'm on 1.2.1 and am still having this exact issue. Absolute time searches return 0 hits when there are lots of matching logs that turn up with relative searches. Would really love to have this fixed.

@drewmmiranda commented Oct 29, 2015

Same. I opened a new issue on it.
