
Inference via demo.py failed due to argument length #936

Closed
CMobley7 opened this issue Feb 24, 2020 · 4 comments
CMobley7 commented Feb 24, 2020

Instructions To Reproduce the Issue:

  1. what exact command you run:
python3 demo/demo.py --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml --input list_of_filenames --output /path/to/output/folder --opts 'MODEL.WEIGHTS /path/to/model/

list_of_filenames is built up via the following code.

from pathlib import Path

files = []
file_extensions = ["*.jpg", "*.jpeg", "*.png", "*.tif", "*.gif"]
for extension in file_extensions:
    for filename in Path("/path/to/files").glob(extension):
        files.append(str(filename))
  2. what you observed (including the full logs):
OSError: [Errno 7] Argument list too long: 'python3'

Expected behavior:

I would expect inference to be performed on all images listed after --input, using the --config-file and MODEL.WEIGHTS specified, with the results placed in --output. However, it appears that my command is too long. Is there any way an additional argument, --input_dir, could be added to demo.py to allow inference on whole directories, or is there a more appropriate way to do this? Also, is there a way to enable multi-GPU support for inference via demo.py, or is there a more appropriate way to do that as well?
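For context, one generic workaround for "Argument list too long" (a sketch, not a feature of demo.py; the chunk size and file names are illustrative) is to split the file list into chunks and invoke the script once per chunk, so each command line stays under the OS argument-length limit:

```python
# Sketch: split a long file list so each invocation stays under ARG_MAX.
# The chunk size of 1000 is illustrative, not tuned.
def chunked(items, size):
    """Yield successive fixed-size chunks from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

files = [f"image_{i}.jpg" for i in range(2500)]
commands = [
    ["python3", "demo/demo.py", "--input", *chunk]
    for chunk in chunked(files, 1000)
]
# Each entry in `commands` could then be run with subprocess.run(...).
```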

Environment:

------------------------  ---------------------------------------------------------
sys.platform              linux
Python                    3.6.9 (default, Nov  7 2019, 10:44:02) [GCC 8.3.0]
numpy                     1.18.1
detectron2                0.1 @/podc/src/detectron2/detectron2
detectron2 compiler       GCC 7.4
detectron2 CUDA compiler  10.1
detectron2 arch flags     sm_60
DETECTRON2_ENV_MODULE     <not set>
PyTorch                   1.4.0 @/usr/local/lib/python3.6/dist-packages/torch
PyTorch debug build       False
CUDA available            True
GPU 0,1,2,3               Tesla P100-SXM2-16GB
CUDA_HOME                 /usr/local/cuda
NVCC                      Cuda compilation tools, release 10.1, V10.1.243
Pillow                    7.0.0
torchvision               0.5.0 @/usr/local/lib/python3.6/dist-packages/torchvision
torchvision arch flags    sm_35, sm_50, sm_60, sm_70, sm_75
------------------------  ---------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CUDA Runtime 10.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.3
  - Magma 2.5.1
  - Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF

ppwwyyxx commented Feb 24, 2020

You can pass a glob pattern such as --input 'abcd/*.jpg'.

demo.py is just a simple demo; complicated features like multi-GPU support are not meant to be added there.
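The reason the quoted pattern works: the shell passes it through as a single argument instead of expanding it into thousands of argv entries, and the script can expand it in Python. A minimal sketch of that script-side expansion (assuming the pattern arrives unexpanded):

```python
import glob
import os

def expand_pattern(pattern):
    """Expand a quoted glob pattern inside the script, after the shell
    has passed it through as one unexpanded string."""
    return sorted(glob.glob(os.path.expanduser(pattern)))
```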

@CMobley7
Author

Thanks @ppwwyyxx. Is there a more appropriate script for performing just inference? train_net.py allows evaluation only, which includes inference, with multi-GPU support. If I don't pass in a validation dataset, will it perform just inference, or will it break because it requires a validation dataset?

@ppwwyyxx
Contributor

train_net.py only performs training and evaluation.
demo.py is an inference example.


CMobley7 commented Mar 5, 2020

If anyone stumbles upon this issue trying to perform inference on a directory of images: you can either write your own script, which isn't difficult (see https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py and #282), or, if you'd rather stick to the default scripts so you can keep pulling master without worrying about breaking changes, you can build an annotation file for your data on the fly with the following code, register it with register_coco_instances, and use train_net.py with --eval-only MODEL.WEIGHTS /path/to/.pth/file DATASETS.TEST name_of_registered_dataset

import json
from pathlib import Path

from PIL import Image

images_info = []
file_extensions = ["*.jpg", "*.jpeg", "*.png", "*.tif", "*.gif"]
for extension in file_extensions:
    for filename in Path("/path/to/data/dir").glob(extension):
        image = Image.open(filename)
        width, height = image.size
        images_info.append([str(filename), int(height), int(width)])

images = [
    {
        "file_name": image_info[0],
        "height": image_info[1],
        "width": image_info[2],
        "id": i,
    }
    for i, image_info in enumerate(images_info, start=1)
]

infer_ann = {
    "categories": OUTPUT_CATEGORIES,
    "annotations": [],
    "images": images,
}

with open("/desired/output/file/path.json", "w") as f:
    json.dump(infer_ann, f)

where OUTPUT_CATEGORIES is

from pycocotools.coco import COCO

coco = COCO("path/to/ann/model/was/trained/on")
OUTPUT_CATEGORIES = coco.loadCats(coco.getCatIds())

I'd suggest saving OUTPUT_CATEGORIES when you train a model, so you can just read it back in from a .json later.
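That save/load step can be sketched like this (the file name and sample categories are hypothetical; the categories list is whatever coco.loadCats(...) returned at train time):

```python
import json

def save_categories(categories, path):
    """Persist the COCO categories list (e.g. from coco.loadCats(...))
    at train time, so inference doesn't need the training annotations."""
    with open(path, "w") as f:
        json.dump(categories, f)

def load_categories(path):
    """Read the saved categories list back at inference time."""
    with open(path) as f:
        return json.load(f)
```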

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 1, 2021