"RuntimeError: output is too large" when extracting meshes with 1024x1024 models #14

Closed
bluestyle97 opened this issue Sep 1, 2022 · 2 comments
Labels: documentation (Improvements or additions to documentation), good first issue (Good for newcomers)

Comments

@bluestyle97

Hi, thanks for your excellent work! When I tried to use the command in README.md to extract a mesh with a 1024x1024 pre-trained model (e.g., FFHQ1024, MetFaces), I got the error below. I'm using a 40GB A100 GPU, and extracting meshes with a lower-resolution pre-trained model works properly.

Traceback (most recent call last):
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/eval/vis/extract_mesh.py", line 302, in <module>
    main(opt)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/eval/vis/extract_mesh.py", line 255, in main
    mesh = generate_mesh(
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/eval/vis/extract_mesh.py", line 95, in generate_mesh
    tmp_mpi_rgbas = gen(
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/networks/networks_cond_on_pos_enc.py", line 1323, in forward
    img = self.synthesize(ws=ws, n_planes=n_planes, mpi_xyz_coords=mpi_xyz_coords, xyz_coords_only_z=xyz_coords_only_z,
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/networks/networks_cond_on_pos_enc.py", line 1295, in synthesize
    img = self.synthesis(ws, xyz_coords=mpi_xyz_coords, enable_feat_net_grad=enable_syn_feat_net_grad, xyz_coords_only_z=xyz_coords_only_z, n_planes=n_planes, **synthesis_kwargs)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/networks/networks_cond_on_pos_enc.py", line 1205, in forward
    x, img = block(x, img, cur_ws, xyz_coords=tmp_xyz_coords, xyz_coords_only_z=xyz_coords_only_z, n_planes=n_planes, enable_feat_net_grad=enable_feat_net_grad, **block_kwargs)
  File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/networks/networks_cond_on_pos_enc.py", line 792, in forward
    img = upfirdn2d.upsample2d(img, self.resample_filter)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/torch_utils/ops/upfirdn2d.py", line 343, in upsample2d
    return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/torch_utils/ops/upfirdn2d.py", line 163, in upfirdn2d
    return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f)
  File "/group/30042/jialexu/projects/ml-gmpi/gmpi/models/torch_utils/ops/upfirdn2d.py", line 237, in forward
    y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
RuntimeError: output is too large

Do you have any advice to fix this error? Thanks in advance.

@Xiaoming-Zhao (Collaborator) commented Sep 1, 2022

May I know the command you used for this? In the meantime, try setting chunk_n_planes to 32 or 64 instead of -1.

Essentially, -1 means querying all 1024 planes at resolution 1024x1024 in a single pass, which may be too large. If you set chunk_n_planes to 32, the planes will be queried in chunks of 32.

if chunk_n_planes == -1:
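For readers hitting the same error, the idea behind chunk_n_planes is roughly the following. This is a minimal illustrative sketch, not the repository's actual extract_mesh.py code; the query_fn callable and the plane dimension used for concatenation are assumptions made for the example:

    import torch

    def query_planes_in_chunks(query_fn, n_planes, chunk_n_planes=64):
        # query_fn(start, end) is a hypothetical stand-in for the generator forward
        # pass that returns the RGBA planes for indices [start, end).
        # chunk_n_planes == -1 reproduces the original behavior: query everything at once.
        if chunk_n_planes == -1:
            chunk_n_planes = n_planes
        chunks = []
        for start in range(0, n_planes, chunk_n_planes):
            end = min(start + chunk_n_planes, n_planes)
            chunks.append(query_fn(start, end))  # one smaller forward pass per chunk
        return torch.cat(chunks, dim=1)  # assumed plane dimension; adjust to the actual tensor layout

Each chunk produces a much smaller output tensor, so the upfirdn2d CUDA kernel never has to allocate the full 1024-plane result in one call.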

@bluestyle97 (Author)

I modified chunk_n_planes from -1 to 64 and it works now, thanks a lot!

@Xiaoming-Zhao added the "help wanted" label on Sep 1, 2022
@fangchangma added the "documentation" and "good first issue" labels and removed the "help wanted" label on Sep 1, 2022