Kolmogorov atmosphere #105

Closed
joergdietrich opened this issue Apr 12, 2012 · 47 comments
Labels: base classes (Related to GSObject, Image, or other base GalSim classes), feature request (Request for a new feature in GalSim)

@joergdietrich
Member

Implement an atmospheric PSF based on Kolmogorov turbulence. I'm assigning this to myself and plan to have it ready for the 2nd milestone. This will be done in python initially and may have to be rewritten in C++ if it is too slow, as discussed in #25.

@ghost assigned joergdietrich Apr 12, 2012
@rmandelb
Member

Sounds like a plan, thanks!

@joergdietrich
Member Author

The increasingly epic discussion on issue #117 mentions image scaling, and there is a related issue #82. I don't want to hijack those discussions, so I'm putting my questions to the list.

I am trying to implement a simple long-exposure Kolmogorov PSF (#105) and looked for guidance on how to scale the images I draw in the optics module. I am thoroughly confused now.

SBProfile has a stepK() method. This has a different value than the stepk computed by Barney in optics.py. Either of these must define the pixel scale in Fourier space, right? What is the other? Isn't the pixel scale in configuration space automatically determined by this? What is the dx for, then? And is the dx in GSObject.__init__ the same as in draw()?

@barnabytprowe
Member

Hi Joerg, I know you were interested in a description of what stepK, maxK etc. are and how they relate to dx and the image size, in both SBProfile and the Python Optics module. I'll do my best to respond to the email queries here...

SBProfile has a stepK() method. This has a different value than the stepk computed by Barney in optics.py. Either of these must define the pixel scale in Fourier space, right? What is the other?

So the stepK is indeed the k-space step size; however, I cannot find the stepk you are referring to in optics.py. In base.py I define a stepk_airy, which I think is in fact the same as what Gary defines for the specific SBAiry class of SBProfile. So both are indeed the "pixel scale" in Fourier space.

Isn't the pixel scale in configuration space automatically determined by this?

In short, no. The pixel scale can be chosen by the user. Unless the user requests another pixel scale, the sensible choice of pixel scale in real space dx is determined by the maximum k value in Fourier space, maxK, i.e. the physical extent of the region in k-space mapped by the array representation of the Fourier spectrum.

I think the code generally defaults to the Nyquist value dx = M_PI / maxK (using Gary's definition of k, see docs/SBProfile.pdf), which guarantees the image is oversampled provided that the Fourier spectrum of the function in question was bandlimited at some |k| < maxK.

In the crazy inverted world of Fourier-real mappings, stepK is therefore related to the physical extent of the image in real space, i.e. number of pixels * dx. Thus, I actually use stepk_airy to set the real space extent of the SBInterpolatedImage lookup table used to generate the OpticalPSF by interpolation: this dimension is proportional to the reciprocal of stepk_airy. If this dimension is too small, then in generating the lookup table we actually get image "folding", the real space counterpart to aliasing. I therefore use Gary's rule for the Airy function (the limiting case of perfect optics) and multiply by a simple factor (pad_factor, default = 2) to give myself some extra breathing space once aberrations are included.
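
To make those scalings concrete, here is a quick sketch in plain Python of the arithmetic described above; it is not GalSim API, and the numbers at the end are purely illustrative:

    import math

    def nyquist_dx(maxk):
        """Real-space pixel scale that oversamples a profile band-limited at |k| < maxk."""
        return math.pi / maxk

    def folding_extent(stepk):
        """Rough real-space size (number of pixels * dx) below which folding sets in."""
        return 2.0 * math.pi / stepk

    # Illustrative numbers only:
    dx = nyquist_dx(10.0)          # ~0.314
    size = folding_extent(0.25)    # ~25.1, so roughly size / dx ~ 80 pixels across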

Hmmm. Not very concise! I'd be happy to chat on Skype about this if you wish Joerg (will email my Skype name if that would be handy)!

@joergdietrich
Member Author

This is very helpful! Thank you very much, indeed! A lot of the information I was looking for in the Doxygen docs is in docs/SBProfile.pdf, I just realized. I'll let you know if I need anything, but this gives me plenty to work with right now.


@rmandelb
Member

Ah ha, this suggests we should make sure that useful information from SBProfile.pdf ends up in the Doxygen docs.

@barnabytprowe
Member

I've added this to the remit of #21, good suggestion.

@joergdietrich
Member Author

I have removed this from Milestone 2. I won't be able to finish it on time and it's not holding anything up right now, as far as I can tell. I'll get to work on it next week.

@rmandelb
Member

You're right that it's not holding anything up, so don't worry about it.

@barnabytprowe
Member

Fine by me, thanks Joerg (and hope Munich is nice!)

@joergdietrich
Member Author

I'm beginning to feel incredibly stupid and frustrated by this. I still don't understand what the interplay of the various parameters is. Why is the maxk of OpticalPSF different than the one reported by maxK()?

There's now some code outline in this branch. Why is the shape of .draw() different than (npix, npix) if dx=1?

@barnabytprowe
Member

Perhaps the best thing to do for efficiency is:

i) Joerg, you and I talk on Skype today/tomorrow/as soon as is convenient
for you.

ii) Based on the places where we find my code has confused you so badly
(I'm sorry!), I will go into the functions and rename some variables and
add clearer comments.

My skype username is barnaby_rowe... When would be good for you? (Anyone
else is welcome to join btw!)


@joergdietrich
Member Author

Tomorrow at 2pm EDT? skype id joergpdietrich

@gbernstein
Member

Joerg, I've just discovered this issue and can offer a little potential help since I had implemented a Kolmogorov PSF in an earlier version of the SBProfile code. Here is a snippet of what I had written:

const double KolmogorovFWHM=2.92;   // FWHM * k0 for the Kolmogorov profile
class PsfKolmogorov  {
public:
  PsfKolmogorov(double fwhm=1.):  k0(KolmogorovFWHM/fwhm) {}
  // Fourier-space profile: exp(-(k/k0)^(5/3))
  double kval(const double k) const {return exp(-pow(k/k0,1.666667));}
  // Real-space profile is not analytic; leaving xval() unimplemented means it is
  // computed by DFT on demand.
  double xval(const double x) const {
    throw PsfNotImplemented("Kolmogorov xval()");}
  double kMax() const {return 3.*k0;}   // hard-wired here; see the 2.7*k0 estimate below
  double xMax() const {return 5./k0;}   // folding radius of 5/k0 = FWHM*(5/2.92)
 private:
  double k0;
};

The defining characteristic of Kolmogorov is that its Fourier transform is exp(-pow(k/k0,1.666667)), i.e. the exponential of a power law in k, where FWHM = 2.92/k0. You'll want to set kMax to the point where you're willing to neglect any higher k's. So one way to do it would be to solve

ALIAS_THRESHOLD = exp( -pow(maxK/k0, 5/3))

which for ALIAS_THRESHOLD=0.005 gives maxK = 2.7 k0. (In the code above this was hard-wired to 3*k0).

For stepK, you need to decide at what radius you'd be willing to "fold" the PSF onto itself. Then set stepK=pi / (fold radius). Above I had chosen to always go to at least radius 5/k0 = FWHM * (5/2.92).
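
In plain Python, that prescription works out to roughly the following (the names are illustrative, not SBProfile code):

    import math

    KOLMOGOROV_FWHM = 2.92     # FWHM * k0 for the Kolmogorov profile
    ALIAS_THRESHOLD = 0.005    # level below which we neglect higher k

    def kolmogorov_scales(fwhm=1.0):
        k0 = KOLMOGOROV_FWHM / fwhm
        # Solve ALIAS_THRESHOLD = exp(-(maxk/k0)**(5/3)) for maxk:
        maxk = k0 * (-math.log(ALIAS_THRESHOLD)) ** (3.0 / 5.0)   # ~2.7 * k0
        # Fold at radius 5/k0 = FWHM * (5/2.92), so stepk = pi / (5/k0):
        stepk = math.pi * k0 / 5.0
        return k0, maxk, stepk

    print(kolmogorov_scales())   # (2.92, ~7.9, ~1.8) for fwhm = 1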

The hard part about Kolmogorov is that the real-space radial function is not analytic. The Moffat approximation is nice, but it's not exact - looks like errors as big as 1% of peak in that Trujillo paper you reference. The other SB's have pretty high accuracy equivalence between their real-space and Fourier results, which we might wish to maintain. So we may wish to compute the real space Kolmogorov via DFT from the formula above. It would also be ok to leave the real-space function "unimplemented" so the DFT will simply be done on demand.
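
A rough numerical sketch of that DFT route (numpy, with purely illustrative grid choices) might look like:

    import numpy as np

    def kolmogorov_xspace(k0=2.92, n=1024, dk=0.05):
        # Build a k grid in the FFT's natural ordering, with spacing dk:
        kx = np.fft.fftfreq(n) * n * dk
        kk = np.hypot(*np.meshgrid(kx, kx))
        # Evaluate the analytic Fourier-space profile and transform to real space:
        ft = np.exp(-(kk / k0) ** (5.0 / 3.0))
        img = np.fft.fftshift(np.fft.ifft2(ft).real)
        dx = 2.0 * np.pi / (n * dk)   # real-space sampling implied by the k grid
        return img / img.max(), dx

    profile, dx = kolmogorov_xspace()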

I'm happy to help as needed...

@barnabytprowe
Member

2pm EDT tomorrow works fine for me Joerg, thank you.

And Gary: that's great, thanks, very useful... I really want to get involved in the atmosphere stuff more myself now too, it interests me!

@barnabytprowe
Member

Hi Joerg,

So following our very helpful conversation, I've opened an issue about the clarity of the OpticalPSF kmax stuff (#166). However, before I open an issue about the _scale parameter setting (N.B. to all: I will explain this shortly, I promise!), I'm just going to update this branch with the current version of master to check that this wasn't simply due to the presence of half-finished code...

barnabytprowe added a commit that referenced this issue May 23, 2012
@barnabytprowe
Member

Aha, so in fact Joerg, now that I look I do see that the scale of the BaseImage is set in the constructor in Image.h, and its derived subclass constructors follow this pattern, whereas this wasn't the case before. Someone must have spotted this.

However, the setting of _scale by the constructor is not propagated into the pysrc/Image.cpp Python wrappers, so this may still be worth an issue to clear up what is going on.

As a sanity check, would it be possible to scons -c on this branch and retry that short test we did? Before, you were getting npix ~ 900 when not specifying a dx to draw(), and getting the expected and explained npix = 266 when setting dx=1. This dx=1 is now the default in the image constructor, and so it should be set as that in your image of the Kolmogorov PSF... Then, when SBInterpolatedImage uses its draw method, the default is to use the dx of the image if the user does not specify one. It would be good to check this is happening! We should get 266 whether dx is given or not...

@rmandelb
Member

There was some screwiness in _scale that Mike/Gary fixed in #44 and therefore it is fixed on master as of yesterday. I don't know if that will solve your particular issue, but it's definitely worth a try!

@barnabytprowe
Member

hehe simultaneous problem-solving redundancy :)

@joergdietrich
Member Author

No change:

In [1]:import galsim

In [2]:apsf = galsim.AtmosphericPSF(1)

In [3]:img = apsf.draw()

In [4]:img.array.shape
Out[4]:(994, 994)

In [5]:img = apsf.draw(dx=1.)

In [6]:img.array.shape
Out[6]:(266, 266)

@rmandelb
Member

can you use getScale to check the scale parameters for those explicitly?


@joergdietrich
Member Author

In [7]:img = apsf.draw()

In [8]:img.getScale()
Out[8]:1.0

In [9]:img = apsf.draw(dx=1.)

In [10]:img.getScale()
Out[10]:1.0

Maybe this should go to its own issue now?

@barnabytprowe
Member

Now, I think we need to use getScale() on the atmoimage defined in base.py... I'll take a look

@barnabytprowe
Member

hmmm, think I've found it. will report back after lunch before opening the issue, would like to understand this!

@barnabytprowe
Member

OK, I think I understand what is going on, and I don't think it's wholly a bug, but will explain (N.B. If @gbernstein or @rmjarvis are able to point to flaws in the following logic, I'd be grateful!)

When the SBInterpolatedImage draw() method is called without specifying a dx on input, one is chosen. This is done in src/SBProfile.cpp, as follows:

// Determine desired dx:
if (dx<=0.) dx = M_PI / maxK();

...using the maxK() of the SBProfile.

For SBInterpolatedImage objects, the value of maxK() depends not only on the scale in the lookup table image, but also on the extent of the interpolant in k-space. So, just as we discussed how the addition of the interpolating kernel increases the non-folding image size (and thus decreases stepK) by the real-space support of the kernel, the extent of the interpolant in k-space impacts maxK: the default dx of the draw() method is chosen to sample the interpolated image, even those higher frequencies brought in purely by the interpolation kernel.

Specifically, the line that sets this is line 82 in SBInterpolatedImage.h:

// Notice that interpolant other than sinc may make max frequency higher than
// the Nyquist frequency of the initial image
double maxK() const { return xInterp->urange() * 2.*M_PI / dx; }

The urange methods return different things for different interpolants. For the Sinc function interpolation, urange() = 0.5, giving the classic Nyquist-Shannon result.

For the Lanczos, in this case the Lanczos5, there is actually a bit of calculation to be done using the cubic spline approximation to the Lanczos in Fourier space that SBProfile uses... The end result depends on the order n of the Lanczos, and on our chosen tol (1.e-4 in this case, which means that urange() >> 0.5). The return value of the Lanczos urange() method actually comes from a stored attribute called uMax, which I think is calculated on initialization via a setup() function, in src/Interpolant.cpp around lines 80-90:

    void Lanczos::setup() 
    {
        // Reduce range slightly from n so we're not including points with zero weight in
        // interpolations:
        range = n*(1-0.1*std::sqrt(tolerance));
        const double uStep = 0.01/n;
        uMax = 0.;
        double u = tab.size()>0 ? tab.argMax() + uStep : 0.;
        while ( u - uMax < 1./n || u<1.1) {
            double ft = uCalc(u);
            tab.addEntry(u, ft);
            if (std::abs(ft) > tolerance) uMax = u;
            u += uStep;
        }
        u1 = uCalc(1.);
    }

We can see what gets chosen for your Lanczos5 with tol=1.e-4 by looking at the SBProfile attribute in Python:

In [1]: import galsim

In [2]: apsf = galsim.AtmosphericPSF(1)

then

In [4]: apsf.SBProfile.maxK()
Out[4]: 11.736990153811476

Since dx=1 is currently being hard-wired, we know that urange() is 11.736990153811476 / 2pi = 1.868, significantly larger than 0.5.

This means that the default dx chosen for drawing is not 1, but pi / 11.736990153811476 = 0.2676659528907921. This is the spacing that SBProfile determines as necessary to see all the modes in the image + interpolant down to the tol value we've selected.

Finally, as a sanity check, 266 / dx = 993.776, which rounds up to 994 as you found.
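
Putting those numbers together in plain Python (the only input is the maxK() value printed above):

    import math

    maxk = 11.736990153811476        # apsf.SBProfile.maxK() from above
    dx_lookup = 1.0                  # the hard-wired lookup table pixel scale

    urange = maxk * dx_lookup / (2.0 * math.pi)
    print(urange)                    # ~1.868, well above the sinc value of 0.5

    dx_draw = math.pi / maxk         # the default dx chosen by draw()
    print(dx_draw)                   # ~0.26767

    print(266 / dx_draw)             # ~993.8, rounding up to the 994 pixels seen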

So I think the drawing at least is behaving how we expect.

However... I do think it would have been nice if this value of dx were passed from draw into the Image that gets drawn, so that it does not simply have its default value 1, as we saw above. I think this can be traced back to around lines 45-50 in src/SBProfile.cpp:

//
// Common methods of Base Class "SBProfile"
//
    ImageView<float> SBProfile::draw(double dx, int wmult) const 
    {
        Image<float> img;
        draw(img, dx, wmult);
        return img.view();
    }

...the Image<float> img here is default-constructed, so its scale is just the default.

This is a question again aimed, I guess, at Gary and Mike: is there a way to make sure this image has the correct dx and yet not affect the actual drawing? My first guess would be simply to use an img.setScale(dx) method call before returning img.view() above, but does that cause any other conflicts later that you can think of?

@gbernstein
Member

Your explanation of the genealogy of the dx used in the FFT looks correct to me.

But I don't understand your final question: what do you mean by the "correct" dx? Do you want the output Image() class's scale value to match the dx chosen by the FFT? Or do you want the image to be drawn with a preselected dx? I thought the code did both already in fact, the latter when you call draw() with a positive value of dx.


@barnabytprowe
Member

The issue is that, when drawn using the default-chosen dx=0.2676659528907921, the output image reports

In [7]:img = apsf.draw()

In [8]:img.getScale()
Out[8]:1.0

when in the physical units used throughout the apsf instance, its scale is actually 0.2676... This is, I think, the first of the two options you mention, and I don't think it is happening at the moment.

@gbernstein
Member

True enough. But it looks like the setScale is called there too.

On May 23, 2012, at 7:29 PM, Barnaby Rowe wrote:

(P.S. Not really related, but Joerg is using an SBInterpolatedImages and I thought these were drawn using realDraw(), since they have analyticX=True...?)



@barnabytprowe
Member

Hmmm... Yes, it gets set by what comes out of the fillXImage method here:

// TODO: If we decide not to keep the scale, then can switch to simply:
// return fillXImage(I.view(), dx);
// (And switch fillXImage to take a const ImageView<T>& argument.)
ImageView<T> Iv = I.view();
double ret = fillXImage(Iv, dx);
I.setScale(Iv.getScale());
return ret;

I found this in src/SBInterpolatedImage.cpp:

    // Returns total flux
    template <typename T>
    double SBInterpolatedImage::fillXImage(ImageView<T>& I, double dx) const 
    {
        if ( dynamic_cast<const InterpolantXY*> (xInterp)) {
            double sum=0.;
            for (int ix = I.getXMin(); ix <= I.getXMax(); ix++) {
                for (int iy = I.getYMin(); iy <= I.getYMax(); iy++) {
                    Position<double> x(ix*dx,iy*dx);
                    T val = xValue(x);
                    sum += val;
                    I(ix,iy) = val;
                }
            }
            return sum;
        } else {
            // Otherwise just use the normal routine to fill the grid:
            // Note that we need to call doFillXImage, not fillXImage here,
            // to avoid the virtual function resolution.
            return SBProfile::doFillXImage(I,dx);
        }
    }

Is it possible that the first branch is the one being taken above, since I don't see a setScale there? Going to look at the Python now!

@gbernstein
Member

Yes, I think you've got it: that override does not call setScale in the first branch.


@barnabytprowe
Member

OK, going to test there, will let you know (didn't see much in the Python!)

@barnabytprowe
Member

Yes! That fixes it:

In [1]: import galsim

In [2]: apsf = galsim.AtmosphericPSF(1)

In [4]: im.getScale()
Out[4]: 0.2676659528907921

Joerg, are you happy with this default behaviour?

@barnabytprowe
Member

(If so, I might make this one line fix on master and document it there with a comment that refers to this issue).

@rmandelb
Member

I agree about fixing on master.


@rmandelb
Member

(not sure I like that default behavior, but that can be hashed out here, whereas the bug fix should go on master once we make sure it doesn't break some test which we should also fix.)

@barnabytprowe
Member

It doesn't break scons, scons examples or scons tests so far. I'll try running the demos...

@rmandelb
Member

Sounds like you're probably okay then. But... we should think about the fact that we didn't have a unit test that uncovered this!


@barnabytprowe
Member

The demos run on this branch. Will put it on master and comment (have to run home now).

For the unit tests, what do you think? Store values of dx that we know are right (e.g. the one above) for various interpolants to use as regression tests? Or simply check for dx < 1 (for all non-sinc interpolants) on the basis that we can't do better than sinc?

It struck me from the discussion above that this behaviour (i.e. the addition of a kernel interpolant's extended k-space support to maxK, whether or not this is then used to set the default dx) is non-trivial to describe; we'll have to make sure the documentation is good on this!
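
For concreteness, a regression test along the first of those lines might look something like the sketch below; it only uses calls already shown in this thread, and the reference value is the default dx derived above for the Lanczos5 interpolant, so treat it as illustrative rather than an agreed spec:

    import numpy as np
    import galsim

    def test_atmospheric_psf_default_dx():
        apsf = galsim.AtmosphericPSF(1)
        img = apsf.draw()
        # Regression against the default dx worked out above (pi / maxK for Lanczos5):
        np.testing.assert_almost_equal(img.getScale(), 0.2676659528907921, decimal=9)
        # Weaker sanity check: a non-sinc interpolant cannot do better than sinc (dx < 1).
        assert img.getScale() < 1.0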

@rmandelb
Member

I think it would make sense to start a new issue for this question of unit tests, as others who have played with the code and might not be following this issue could have some useful ideas. I agree with you about documentation.

@joergdietrich
Member Author

I think I like this behavior.


@barnabytprowe
Member

Hi all. I merged the pull request for this branch, closing the issue. Many thanks!

@barnabytprowe
Member

So, as mentioned above, I merged with master and very stupidly didn't do the final checks. We're getting this test failure:

..F..
======================================================================
FAIL: Test some basic properties of a known Atmospheric PSF.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/EPD64.framework/Versions/7.1/lib/python2.7/site-packages/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/Users/browe/great3/GalSim/tests/test_atmosphere.py", line 61, in test_AtmosphericPSF_properties
    err_msg="Atmospheric PSF .stepk() does not return known value.")
  File "/Library/Frameworks/EPD64.framework/Versions/7.1/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal
    raise AssertionError(msg)
AssertionError: 
Arrays are not almost equal to 9 decimals
Atmospheric PSF .stepk() does not return known value.
 ACTUAL: 0.1533148336249943
 DESIRED: 0.3066296672499886

----------------------------------------------------------------------
Ran 5 tests in 1.501s

FAILED (failures=1)

Currently this means that some tests are failing on master, apologies again!

@rmjarvis
Member

rmjarvis commented Jun 7, 2012

I went ahead and fixed this on master. There was also a python 2.6 incompatibility (tuple - tuple) which I fixed. I don't think this merits another pull request.

@rmjarvis closed this as completed Jun 7, 2012
@barnabytprowe
Member

Great, thanks Mike!

@rmjarvis added the "base classes" and "feature request" labels and removed the "module" label Nov 21, 2016