Duplicate paths are being inserted. #83

Closed
sasank6192 opened this issue Jul 22, 2020 · 6 comments

Comments

@sasank6192

Hi,
When I execute diskover on my storage mount point, I see duplicate parent paths being inserted during the crawl. Every new run produces a different set of duplicate records. Can you help me with this issue? My understanding is that when jobs are split across the worker nodes, the same files are being assigned to more than one worker.

Regards,
Sasanka

@varontron

varontron commented Jul 28, 2020

I may be having this problem as well. When I run --finddupes, diskover-web is reporting hundreds of instances of the same file as a duplicate of itself. Drilling down on the files reveals duplicate entries for the same file.

EDIT: I dropped and recreated my index without the splitfiles and chunkfiles options. I am still getting false-positive duplicates when running --finddupes.

@shirosaidev
Collaborator

What version of diskover and diskover-web are you running? Can you update to the latest if you haven't already and see if the same issue persists? Just to confirm: you are building the index first and then running diskover with --finddupes as a secondary command after the crawl finishes and the index is done building?

@varontron

Some more info on this issue, which is still happening: here is a search result showing that two different workers indexed the same file 21 minutes apart. All metadata is identical. It would obviously be great to resolve this, but in the meantime a workaround to eliminate the duplicate entries from the index would suffice.

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 7,
    "successful": 7,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 15.073035,
    "hits": [
      {
        "_index": "diskover-index",
        "_type": "file",
        "_id": "AXVHRGHuCzgpQBKFqDAe",
        "_score": 15.073035,
        "_source": {
          "last_modified": "2019-04-16T11:33:10",
          "filename": "2019_04-10_CD8Tcells_CblB_KD_vis_SC_Fr1_8.pdResult",
          "indexing_date": "2020-10-20T18:26:23.412843",
          "path_parent": "/vol/S/RD/Biology/LB-ProteomeD1/Mirek/2019_04_10_CD8Tcells_CblB_KD_SC",
          "hardlinks": 1,
          "last_access": "2019-04-19T06:21:50",
          "owner": "dvaron",
          "worker_name": "ip-172-31-200-227.25688",
          "last_change": "2019-08-16T16:54:13",
          "extension": "pdresult",
          "inode": "281474976797872",
          "filesize": 25064509440,
          "tag": "",
          "group": "dvaron",
          "tag_custom": "",
          "dupe_md5": "",
          "filehash": "8f04cef5fbb8e1abb9e88e43378a6c78"
        }
      },
      {
        "_index": "diskover-index",
        "_type": "file",
        "_id": "AXVHVoaWCzgpQBKFrGRH",
        "_score": 15.073035,
        "_source": {
          "tag": "",
          "inode": "281474976797872",
          "filesize": 25064509440,
          "last_modified": "2019-04-16T11:33:10",
          "owner": "dvaron",
          "dupe_md5": "",
          "worker_name": "ip-172-31-200-227.25718",
          "filehash": "8f04cef5fbb8e1abb9e88e43378a6c78",
          "path_parent": "/vol/S/RD/Biology/LB-ProteomeD1/Mirek/2019_04_10_CD8Tcells_CblB_KD_SC",
          "filename": "2019_04-10_CD8Tcells_CblB_KD_vis_SC_Fr1_8.pdResult",
          "hardlinks": 1,
          "last_access": "2019-04-19T06:21:50",
          "indexing_date": "2020-10-20T18:47:07.242652",
          "last_change": "2019-08-16T16:54:13",
          "group": "dvaron",
          "tag_custom": "",
          "extension": "pdresult"
        }
      }
    ]
  }
}
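
As an interim workaround, something along these lines could prune the extra copies. This is only a sketch, not part of diskover: it assumes the elasticsearch==5.5.3 Python client pinned in requirements.txt, and that filehash (identical in both hits above) is stored as an exact-value field and is the same across copies of a file. The index and doc type names are taken from the output above; adjust them and the host to your setup.

# Dedupe sketch: keep one document per filehash, delete the rest.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])  # assumed host; adjust as needed
INDEX = "diskover-index"

seen = {}        # filehash -> _id of the copy we keep
to_delete = []   # _ids of redundant copies

# Scroll over every file doc, fetching only the filehash field.
for doc in helpers.scan(es, index=INDEX, doc_type="file", _source=["filehash"]):
    fh = doc["_source"].get("filehash")
    if not fh:
        continue
    if fh in seen:
        to_delete.append(doc["_id"])
    else:
        seen[fh] = doc["_id"]

# Bulk-delete the redundant documents.
actions = ({"_op_type": "delete", "_index": INDEX, "_type": "file", "_id": i}
           for i in to_delete)
helpers.bulk(es, actions)
print("removed %d duplicate docs" % len(to_delete))

Run it against a copy of the index first; the deletions are not reversible.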

@shirosaidev
Collaborator

@varontron what version of Redis are you running, and what versions of the redis and rq Python libraries? Please verify you are using the same versions of everything in requirements.txt (Python requirements). Are you using Python 3.5+?

elasticsearch==5.5.3
requests==2.23.0
scandir==1.10.0
progressbar2==3.51.3
redis==3.5.0
rq==1.3.0
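
For reference, one quick way to check the installed versions against the pins above (a sketch using pkg_resources, which ships with setuptools):

# Print the Python version and the installed version of each pinned library.
import sys
import pkg_resources

print("python", sys.version.split()[0])
for pkg in ("elasticsearch", "requests", "scandir", "progressbar2", "redis", "rq"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "NOT INSTALLED")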

@shirosaidev
Collaborator

Also, are you running diskover and all worker bots on the same host or on different hosts? What is the full diskover.py command you are using?

@shirosaidev
Collaborator

Please stop and kill any remaining bots that are still listed in redis/rq, and index again after you have verified that the requirements above match. To verify all bots are stopped and no longer in redis/rq, please refer to the page below on stuck bots. Also verify that all rq queues are empty before indexing again, to see whether that gets rid of the duplicate entries in the index. I have not heard of this issue before.
https://github.com/shirosaidev/diskover/wiki/Worker-bots-and-batch-sizes
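
One way to check from Python that no worker bots are still registered and that every rq queue is empty (a sketch; assumes the Redis server is on localhost on the default port):

# List any rq workers still registered and the job count in each queue.
from redis import Redis
from rq import Queue, Worker

conn = Redis(host="localhost", port=6379)

workers = Worker.all(connection=conn)
print("registered workers:", [w.name for w in workers])

for q in Queue.all(connection=conn):
    print("queue %s: %d queued jobs" % (q.name, q.count))

If any workers or queued jobs are listed, clear them (per the wiki page above) before starting a new crawl.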
