
I was unable to complete compilation #65

Closed
jiepainanhai opened this issue Oct 21, 2023 · 5 comments

Comments

@jiepainanhai

Platform: Windows 11, PyCharm 2022.1 (Professional).
CUDA Toolkit 11.3, PyTorch 1.11.0, Driver Version: 472.12 (RTX 3060 Ti), CUDA Version: 11.4.
The error message is attached:
error_info.txt

@ar4
Owner

ar4 commented Oct 21, 2023 via email

@jiepainanhai
Author

I'm using Deepwave version 0.0.10. I want to use the 'Propagator' function, but I still don't understand how it works in the new version, so a wrapper would be helpful.

@ar4
Owner

ar4 commented Oct 21, 2023

If you wish to use the old "Propagator" interface, you need to use v0.0.9 or earlier; the interface changed to the new one in v0.0.10 (and has remained the same since then).

I think this wrapper should work to convert from old-style calls to new ones:

import torch
import deepwave


class Propagator():

    def __init__(self, model, dx, pml_width=None, survey_pad=None, vpmax=None):
        """Wrapper to call Deepwave's scalar propagator

        Args:
            model: A dictionary containing a 'vp' key whose value is a
                [ny, nx] shape Float Tensor containing the velocity model.
            dx: A float or list of floats containing cell spacing in each
                dimension.
            pml_width: An int or list of ints specifying number of cells to use
                for the PML. This will be added to the beginning and end of each
                propagating dimension. If provided as a list, it should be of
                length 6, with each sequential group of two integer elements
                referring to the beginning and end PML width for a dimension.
                The last two entries are ignored. Optional, default 20.
            survey_pad: A float or None, or list of such with 2 elements for each
                dimension, specifying the padding (in units of dx) to add.
                In each dimension, the survey (wave propagation) area for each
                batch of shots will be from the left-most source/receiver minus
                the left survey_pad, to the right-most source/receiver plus the
                right survey pad, over all shots in the batch, or to the edges of
                the model, whichever comes first. If a list, it specifies the
                left and right survey_pad in each dimension. If None, the survey
                area will continue to the edges of the model. If a float, that
                value will be used on the left and right of each dimension.
                Optional, default None.
            vpmax: A float specifying the velocity to use when calculating the
                internal time step size using the CFL condition.
                Optional, default None which will use the maximum in the
                provided model.
        """
        if 'vp' not in model:
            raise RuntimeError("model should contain a 'vp' key")

        if not isinstance(model['vp'], torch.Tensor):
            raise RuntimeError("model should be a Tensor")

        if not model['vp'].ndim == 2:
            raise RuntimeError("model should have two dimensions")

        if isinstance(pml_width, list):
            pml_width = pml_width[:4]

        self.vp = model['vp']
        self.dx = dx
        self.pml_width = pml_width
        self.survey_pad = survey_pad
        self.vpmax = vpmax

    def __call__(self, source_amplitudes, source_locations, receiver_locations,
                 dt):
        # The old interface used physical coordinates; the new one uses cell
        # indices, so divide the locations by the cell spacing.
        if isinstance(self.dx, list):
            source_locations[..., 0] /= self.dx[0]
            source_locations[..., 1] /= self.dx[1]
            receiver_locations[..., 0] /= self.dx[0]
            receiver_locations[..., 1] /= self.dx[1]
        else:
            source_locations /= self.dx
            receiver_locations /= self.dx
        # The new interface puts the time dimension last (the old one put it
        # first), so move it before the call and move it back on the output.
        # deepwave.scalar returns a tuple whose last element is the receiver
        # amplitudes.
        return deepwave.scalar(self.vp,
                               self.dx,
                               dt,
                               source_amplitudes=source_amplitudes.movedim(
                                   0, -1),
                               source_locations=source_locations.long(),
                               receiver_locations=receiver_locations.long(),
                               pml_width=self.pml_width,
                               survey_pad=self.survey_pad,
                               max_vel=self.vpmax)[-1].movedim(-1, 0)

Please let me know if you have any problems.

@jiepainanhai
Author

It works very well, thank you

@ar4
Owner

ar4 commented Oct 21, 2023

That is good news. I will then close this Issue, but please feel free to reopen it if you encounter any problems.

@ar4 ar4 closed this as completed Oct 21, 2023