I was unable to complete compilation #65
Hello and thank you for reporting the error that you encountered.
It looks like you might be trying to install an older version of
Deepwave. Are you able to use a more recent release? Version v0.0.19
and later do not generally require the user to compile the code, which
should avoid problems like the one that you encountered. If you can, I
recommend trying to install the latest version of Deepwave. If you
need to install a version before v0.0.10 (when the Deepwave interface
was different) in order to use some existing code, then I think there
are two possible options: you could try installing instead on Linux,
or I could provide you with a wrapper to convert calls to the old
version of Deepwave into ones compatible with the new version of
Deepwave.
I'm using Deepwave version 0.0.10 in order to use the 'Propagator' function. I still don't understand how it works in the new version, so using a wrapper is a good idea.
If you wish to use the old "Propagator" interface, then you need to use v0.0.9 or earlier - the interface changed to the new one in v0.0.10 (and has remained the same since then). I think this wrapper should work to convert from old-style calls to new ones:

```python
import torch

import deepwave


class Propagator():
    def __init__(self, model, dx, pml_width=None, survey_pad=None, vpmax=None):
        """Wrapper to call Deepwave's scalar propagator

        Args:
            model: A dictionary containing a 'vp' key whose value is a
                [ny, nx] shape Float Tensor containing the velocity model.
            dx: A float or list of floats containing cell spacing in each
                dimension.
            pml_width: An int or list of ints specifying number of cells to use
                for the PML. This will be added to the beginning and end of
                each propagating dimension. If provided as a list, it should
                be of length 6, with each sequential group of two integer
                elements referring to the beginning and end PML width for a
                dimension. The last two entries are ignored.
                Optional, default 20.
            survey_pad: A float or None, or list of such with 2 elements for
                each dimension, specifying the padding (in units of dx) to
                add. In each dimension, the survey (wave propagation) area
                for each batch of shots will be from the left-most
                source/receiver minus the left survey_pad, to the right-most
                source/receiver plus the right survey_pad, over all shots in
                the batch, or to the edges of the model, whichever comes
                first. If a list, it specifies the left and right survey_pad
                in each dimension. If None, the survey area will continue to
                the edges of the model. If a float, that value will be used
                on the left and right of each dimension.
                Optional, default None.
            vpmax: A float specifying the velocity to use when calculating
                the internal time step size using the CFL condition.
                Optional, default None, which will use the maximum in the
                provided model.
        """
        if 'vp' not in model:
            raise RuntimeError("model should contain a 'vp' key")
        if not isinstance(model['vp'], torch.Tensor):
            raise RuntimeError("model['vp'] should be a Tensor")
        if model['vp'].ndim != 2:
            raise RuntimeError("model should have two dimensions")
        if isinstance(pml_width, list):
            pml_width = pml_width[:4]
        self.vp = model['vp']
        self.dx = dx
        self.pml_width = pml_width
        self.survey_pad = survey_pad
        self.vpmax = vpmax

    def __call__(self, source_amplitudes, source_locations,
                 receiver_locations, dt):
        # Convert locations from physical coordinates to grid cell indices
        if isinstance(self.dx, list):
            source_locations[..., 0] /= self.dx[0]
            source_locations[..., 1] /= self.dx[1]
            receiver_locations[..., 0] /= self.dx[0]
            receiver_locations[..., 1] /= self.dx[1]
        else:
            source_locations /= self.dx
            receiver_locations /= self.dx
        return deepwave.scalar(self.vp,
                               self.dx,
                               dt,
                               source_amplitudes=source_amplitudes.movedim(
                                   0, -1),
                               source_locations=source_locations.long(),
                               receiver_locations=receiver_locations.long(),
                               pml_width=self.pml_width,
                               survey_pad=self.survey_pad,
                               max_vel=self.vpmax)[-1].movedim(-1, 0)
```

Please let me know if you have any problems.
It works very well, thank you
That is good news. I will then close this Issue, but please feel free to reopen it if you encounter any problems.
Platform: Windows 11, PyCharm 2022.1 (Professional)
CUDA Toolkit 11.3, PyTorch 1.11.0, Driver Version: 472.12 (RTX 3060 Ti), CUDA Version: 11.4
This is the error message:
error_info.txt