
EOFError: Ran out of input #20

Open
momo1986 opened this issue Apr 17, 2019 · 3 comments

Comments

@momo1986

I tried to run it with Python 3. However, the following error is reported:

python train.py --image_batch 32 --video_batch 32 --use_infogan --use_noise --noise_sigma 0.1 --image_discriminator PatchImageDiscriminator --video_discriminator CategoricalVideoDiscriminator --print_every 100 --every_nth 2 --dim_z_content 50 --dim_z_motion 10 --dim_z_category 4 /slow/junyan/VideoSynthesis/mocogan/data/actions logs/actions
{'--batches': '100000',
'--dim_z_category': '4',
'--dim_z_content': '50',
'--dim_z_motion': '10',
'--every_nth': '2',
'--image_batch': '32',
'--image_dataset': '',
'--image_discriminator': 'PatchImageDiscriminator',
'--image_size': '64',
'--n_channels': '3',
'--noise_sigma': '0.1',
'--print_every': '100',
'--use_categories': False,
'--use_infogan': True,
'--use_noise': True,
'--video_batch': '32',
'--video_discriminator': 'CategoricalVideoDiscriminator',
'--video_length': '16',
'<dataset>': '/slow/junyan/VideoSynthesis/mocogan/data/actions',
'<log_folder>': 'logs/actions'}
/root/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
/slow/junyan/VideoSynthesis/mocogan/data/actions/local.db
Traceback (most recent call last):
  File "train.py", line 104, in <module>
    dataset = data.VideoFolderDataset(args['<dataset>'], cache=os.path.join(args['<dataset>'], 'local.db'))
  File "/slow/junyan/VideoSynthesis/mocogan/src/data.py", line 24, in __init__
    print(pickle.load(f))
EOFError: Ran out of input

Here is the relevant code from src/data.py:

class VideoFolderDataset(torch.utils.data.Dataset):
    def __init__(self, folder, cache, min_len=32):
        dataset = ImageFolder(folder)
        self.total_frames = 0
        self.lengths = []
        self.images = []
        print(cache)
        if cache is not None and os.path.exists(cache):
            with open(cache, 'rb') as f:
                print(pickle.load(f))  # EOFError is raised here when local.db exists but is empty
        else:
            for idx, (im, categ) in enumerate(
                    tqdm.tqdm(dataset, desc="Counting total number of frames")):
                img_path, _ = dataset.imgs[idx]
                shorter, longer = min(im.width, im.height), max(im.width, im.height)
                length = longer // shorter
                if length >= min_len:
                    self.images.append((img_path, categ))
                    self.lengths.append(length)

            if cache is not None:
                with open(cache, 'wb') as f:
                    pickle.dump((self.images, self.lengths), f)

        self.cumsum = np.cumsum([0] + self.lengths)
        print("Total number of frames {}".format(np.sum(self.lengths)))
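For context, the traceback above can be reproduced in isolation: calling pickle.load on an empty file is exactly what raises this EOFError. A minimal sketch:

```python
import os
import pickle
import tempfile

fd, path = tempfile.mkstemp()   # creates an empty 0-byte file
os.close(fd)
try:
    with open(path, 'rb') as f:
        pickle.load(f)          # unpickling an empty stream raises EOFError
except EOFError as e:
    print(e)                    # -> Ran out of input
finally:
    os.remove(path)
```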
@Aniket1998

Facing a similar issue for Weizmann Action Dataset on batch sizes larger than 64

@vladyushchenko
Contributor

vladyushchenko commented Jun 27, 2019

The accepted batch size depends on the dataset and your config.
The Weizmann Action Dataset has 72 videos, and since drop_last=True is set in both the image loader and the video loader, the maximum usable batch size is the dataset length.

To solve the issue, you can duplicate the data until it covers the batch size you need (e.g. for batch_size = 128, duplicating once gives 72 * 2 = 144 > 128 samples). Note that simply setting drop_last=False will not solve your issue.
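The batch-count arithmetic behind this can be sketched with a hypothetical helper (not part of mocogan): with drop_last=True, PyTorch's DataLoader yields floor(len(dataset) / batch_size) full batches, so a dataset smaller than the batch size yields none at all.

```python
def num_batches(dataset_len, batch_size, drop_last=True):
    """Number of batches a DataLoader would yield for a dataset of this size."""
    if drop_last:
        return dataset_len // batch_size      # the incomplete final batch is dropped
    return -(-dataset_len // batch_size)      # ceiling division keeps it

print(num_batches(72, 128))       # 0 -> the loader yields nothing
print(num_batches(72 * 2, 128))   # 1 -> duplicating the data gives one full batch
```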

@disanda

disanda commented Apr 6, 2020

I solved the problem by editing data.py at line 22,

from: if cache is not None and os.path.exists(cache):
to: if cache is not None and os.path.exists(cache) and os.path.getsize(cache) != 0:

because the cache file may exist as a 0-byte file (for example, one left over from an interrupted run); it passes the existence check but cannot be unpickled.
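A slightly more defensive variant of that guard, as a sketch: load_cache is a hypothetical helper (not part of the repo), assuming the same (images, lengths) pickle layout that data.py writes, and returning None for any unusable cache so the caller can fall back to rebuilding it.

```python
import os
import pickle

def load_cache(cache):
    """Return the unpickled (images, lengths) tuple, or None if the
    cache file is missing, empty, or corrupt."""
    if cache is None or not os.path.exists(cache) or os.path.getsize(cache) == 0:
        return None
    try:
        with open(cache, 'rb') as f:
            return pickle.load(f)
    except (EOFError, pickle.UnpicklingError):
        return None
```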
