
DM-13065: Add low-level polynomials library. #8

Merged (1 commit, Aug 23, 2018)

Conversation

TallJimbo (Member):

No description provided.

@r-owen (Contributor) left a comment:

This is my first batch of comments. Most are very minor, but the last one is not and I hope we can chat briefly about it before I go on.

* They are intended to be used as the building blocks of higher-level
* objects that are visible to Python users and persisted with our data
* products, such as ChebyshevBoundedField and the math::Function
* hierarchy (DM-13156 and DM-13157, respectively).
Contributor:

Please add a namespace for ChebyshevBoundedField.

Also consider omitting the ticket numbers (unless they are very informative) as they may not age gracefully.

* For both 1-d and 2-d, the library contains the following kinds of objects:
*
* - Basis1d and Basis2d: objects that evaluate basis functions at one
* point at a time. The fundamental 1-d basis is the RecurrenceBasis1d
Contributor:

Please reword "at one point at a time". Would "at a point" suffice? If not, perhaps something along the lines of "at one point per call"?

Member Author:

How about "at individual points"?

*
* - Basis1d and Basis2d: objects that evaluate basis functions at one
* point at a time. The fundamental 1-d basis is the RecurrenceBasis1d
* template, while the fundamental 2-d basis is the PackedBasis2d template.
Contributor:

I find this hard to follow. What do you mean by "the fundamental 1-d basis is..."? It may help to say here that Basis1d and Basis2d are concepts and that RecurrenceBasis1d and PackedBasis2d are implementations of those concepts.

Member Author:

Completely reworded; please see if the new text makes more sense to you.
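As context for the RecurrenceBasis1d mentioned above, the kind of evaluation a recurrence-based 1-d basis performs can be sketched like this (a hypothetical stand-alone function for Chebyshev polynomials of the first kind, not the actual template interface):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch only: evaluate Chebyshev polynomials of the first kind
// at a point x via the standard three-term recurrence
//   T_0(x) = 1,  T_1(x) = x,  T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
std::vector<double> chebyshevBasis(std::size_t order, double x) {
    std::vector<double> t(order + 1);
    t[0] = 1.0;
    if (order >= 1) {
        t[1] = x;
    }
    for (std::size_t n = 2; n <= order; ++n) {
        t[n] = 2.0 * x * t[n - 1] - t[n - 2];
    }
    return t;
}
```

Any basis defined by a three-term recurrence (Legendre, Hermite, ...) follows the same pattern, which is presumably why a single recurrence-driven template suffices for the fundamental 1-d case.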

*
* - PackedIndexIterator and PackedIndexRange, which handle the flattening
* of pairs of 1-d polynomial coefficients and basis functions into 1-d
* arrays.
Contributor:

I find this very confusing. I think you basically mean that these flatten coefficients for a 2-d polynomial into a 1-d array (two 1-d arrays?), but the text says nothing about 2-d and could easily be mis-read to mean that there is a separate Basis1d for each coefficient. Please reword.

Member Author:

Completely reworded; please see if the new text makes more sense to you.

* double a = values[index.flat];
* // standard polynomial basis functions are just powers
* double b = std::pow(point.getX(), index.x)*std::pow(point.getY(), index.y)];
* assert(std::pow(a - b) < std::numeric_limits<double>::epsilon());
Contributor:

I think you want std::abs not std::pow. The same text is present in PackedBasis2d.h

Contributor:

Which brings up a question: is this method really wanted on Basis2d? It seems to only be useful for PackedBasis2d.

Member Author:

Ah, it seems you already came to the same conclusion I just did when thinking about the previous comment. Removed.

Scaled scale(Scaling2d const & first) const;

/// Return the flattened index of the basis function with the given x and y orders.
std::size_t index(std::size_t x, std::size_t y) const;
Contributor:

I suggest reserving "x" and "y" for values; consider xorder and yorder instead.

Also it would help to have some idea of what this is used for and, if your code has a standard order, what that order is (which could be a reference, so it can be described in detail just once).

Member Author:

The more I think about it, the less I think this method (or the IndexRange typedef, or getIndices()) should even exist on Basis2d (as opposed to PackedBasis2d, where I think it still makes sense). Any Basis2d that isn't a PackedBasis2d is probably polar, or at least not something for which a combination of x- and y- indices has a clear meaning. I'm just going to remove this.

// -*- LSST-C++ -*-
/*
* LSST Data Management System
* Copyright 2008-2018 LSST/AURA.
Contributor:

Please update this and all other copyrights to reference the COPYRIGHT file in the root of the package.

// when constructing a BinomialMatrix with nMax higher than we have seen
// before, and wrap that expansion in a mutex for thread safety.
// Reads from the matrix do not require going through the mutex.
static Eigen::MatrixXd & getMatrix();
Contributor:

Would it make any sense to just pre-compute all values on construction? I have no idea how big you expect n to get.

Member Author:

Recompute all of them whenever a new BinomialMatrix instance is constructed, you mean? I think that'd probably be okay, performance-wise, because that still gives the user an opportunity to construct a big matrix once and use the values repeatedly in any inner loops. But is it better than the caching I have here, now that it's implemented? I don't think it'd be faster, and I don't think this is too complex.

Contributor:

I was confused about the code, but it makes sense now. Yes, I would personally just use one matrix per instance, as the current code feels like premature optimization. But as you say, the complexity isn't too bad and it will behave better in tight loops, so yes it seems reasonable.
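The lazily-extended cache being discussed can be sketched roughly like this (hypothetical names; note that this simplified version takes the mutex on reads as well, whereas the real BinomialMatrix is described as avoiding that):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Rough sketch of the caching strategy under discussion, not the actual
// BinomialMatrix implementation: a shared Pascal's triangle that is extended
// under a mutex whenever a larger n is requested. Existing rows are never
// modified, only new rows are appended.
class BinomialCache {
public:
    // Return the binomial coefficient C(n, k), extending the cache if needed.
    static double get(std::size_t n, std::size_t k) {
        std::lock_guard<std::mutex> lock(mutex());
        auto& rows = table();
        while (rows.size() <= n) {
            std::size_t m = rows.size();
            std::vector<double> row(m + 1, 1.0);  // both ends of each row are 1
            for (std::size_t j = 1; j < m; ++j) {
                // Pascal's rule: C(m, j) = C(m-1, j-1) + C(m-1, j)
                row[j] = rows[m - 1][j - 1] + rows[m - 1][j];
            }
            rows.push_back(std::move(row));
        }
        return rows[n][k];
    }

private:
    static std::vector<std::vector<double>>& table() {
        static std::vector<std::vector<double>> t;
        return t;
    }
    static std::mutex& mutex() {
        static std::mutex m;
        return m;
    }
};
```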

*
* @param[in] order Maximum order of the basis (inclusive).
* @param[in] box Box to be mapped to [-1, 1]x[-1, 1]
*/
Contributor:

I suggest noting that this is intended for use by ScaledChebyshev1Function2d to avoid users constructing one of these when what they wanted was the function. If there was a simple way to hide this from the public interface it would be worth considering, but I doubt that can be done in a reasonable way.

Similarly for all the other function-specific basis classes.

Member Author:

The Basis objects are quite useful in their own right. If you want to fit a Chebyshev function, for instance, you'd actually start with a Basis object, use that to construct a matrix (each row would be populated by a call to Basis2d::fill), pass that to a least-squares solver along with a data vector, and only after obtaining the coefficients would you be able to make a Function object that combines the Basis with the coefficients.
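The fitting workflow described above can be sketched in a self-contained way (MonomialBasis1d and fitLine are hypothetical names invented for this sketch; the real code would use the library's Basis classes and a proper least-squares solver rather than explicit 2x2 normal equations):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy 1-d monomial basis; its fill() plays the role the library's
// Basis1d/Basis2d fill() plays: writing one design-matrix row per data point.
struct MonomialBasis1d {
    std::size_t order;
    void fill(double x, std::vector<double>& row) const {
        row.resize(order + 1);
        double p = 1.0;
        for (std::size_t n = 0; n <= order; ++n) {
            row[n] = p;  // standard polynomial basis functions are powers of x
            p *= x;
        }
    }
};

// Fit an order-1 (straight line) model by solving the 2x2 normal equations
// (A^T A) c = A^T b accumulated row by row from basis.fill().
std::array<double, 2> fitLine(MonomialBasis1d const& basis,
                              std::vector<double> const& xs,
                              std::vector<double> const& ys) {
    double s00 = 0, s01 = 0, s11 = 0, r0 = 0, r1 = 0;
    std::vector<double> row;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        basis.fill(xs[i], row);  // one design-matrix row per data point
        s00 += row[0] * row[0];
        s01 += row[0] * row[1];
        s11 += row[1] * row[1];
        r0 += row[0] * ys[i];
        r1 += row[1] * ys[i];
    }
    double det = s00 * s11 - s01 * s01;
    return {(r0 * s11 - r1 * s01) / det, (r1 * s00 - r0 * s01) / det};
}
```

The resulting coefficient vector is exactly what would then be combined with the basis to build a Function object.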

Contributor:

Fair enough. However, I would still appreciate a reference to the appropriate Chebyshev polynomial class.

template <typename Basis>
Function1d<Basis> makeFunction1d(Basis const & basis, Eigen::VectorXd const & coefficients) {
return Function1d<Basis>(basis, coefficients);
}
Contributor:

Please explain why one would call these factory functions instead of using the Function1d constructor. Similarly for the other overload and the two overloads of makeFunction2d. Are these intended for end users?

I'm hoping users can do this:

auto cheby = ScaledChebyshev1Function2d(coefficients, min, max);

but I fear it'll be this, instead:

auto cheby = ScaledChebyshev1Function2d(ScaledChebyshev1Function2d::Basis(order, min, max), coefficients);

That is certainly acceptable, but it has stutter and order can be determined from the length of the coefficients vector.

Member Author (@TallJimbo, Jul 11, 2018):

I'm afraid the normal construction pattern with the constructors would look even worse than your fears (at least for the scaled case):

auto cheby = ScaledChebyshev1Function1d(
    ScaledChebyshev1Basis1d(
        Chebyshev1Basis1d(order),
        makeUnitRangeScaling1d(min, max)
    ),
    coefficients
);

and that's why these free functions exist, so the user can write e.g.

auto cheby = makeFunction1d(makeScaledChebyshev1Basis1d(order, min, max), coefficients);

which is only as bad as your fears.

I agree this is not ideal, but the cures I could think of seemed worse than the disease:

  • To make the free-function construction pattern doable via constructors, I think I'd need to make all of those typedefs into concrete subclasses so they could have custom constructors (at least down to a factor of two of the current number of typedefs - I think the constructors for Polynomial and Chebyshev1 branches would still have the same signatures).

  • Removing the order argument because we can infer that from the coefficients vector works well for 1-d, but it's a bit unpleasant in 2-d because there are 2-d coefficient vector sizes that don't map to any order, and doing the inversion involves converting from integer to floating-point and back (as it involves a sqrt). So it's not impossible, but I liked not doing it slightly better.

I'm happy to get your opinions on both of those questions. I also think things could be improved perhaps by adding more free functions with customized signatures. The above case, for example, could certainly be simplified to:

auto cheby = makeScaledChebyshev1Function1d(order, coefficients, min, max);

or

auto cheby = makeScaledChebyshev1Function1d(coefficients, min, max);

...if we're willing to infer orders from coefficient vector sizes.

It may also be worth pointing out that this does get back a bit to the issue of Basis objects being important in their own right. My usual pattern is actually:

  • make a Basis object;
  • fit for the coefficients;
  • make a Function object;

so from that perspective, needing to pass a fully-constructed Basis object to make a Function isn't onerous, because I have one anyway.

Contributor (@r-owen, Jul 11, 2018):

Thank you for the explanation. I am fine with order being explicit and @parejkoj prefers it, as you do.

If it is practical (which I doubt) to add this constructor: ScaledChebyshev1Basis1d(order, min, max), please consider that instead of the make function. Either way I think we can live without makeScaledChebyshev1Function1d for now. We can easily add it later if there is sufficient demand.

Member Author (@TallJimbo, Jul 17, 2018):

ScaledChebyshev1Basis1d(order, min, max)

This (and its 2-d counterpart) actually weren't too bad. I've added them, and I think that actually removes the need for the makeScaledChebyshev1BasisX functions. I'll go ahead and remove those.

@r-owen (Contributor) left a comment:

Another batch of small suggestions.

_basis(basis),
_coefficients(coefficients)
{
assert(basis.size() == static_cast<std::size_t>(_coefficients.size()));
Contributor:

I beg you to throw an exception instead of asserting, as it seems like it could be a common user error to pass in the wrong number of coefficients. assert seems more suitable to unrecoverable problems due to internal errors.

If you change this please look for it elsewhere as well, as there are more instances.

Member Author:

I believe that a low-level C++ library like this one should declare its preconditions and assert them rather than handling bad inputs from the caller via exceptions. In other words, passing the wrong number of coefficients is considered an unrecoverable error in the calling code. (If you think about it, unless "calling code" is something like an interactive Python prompt, passing a coefficient vector of the wrong size isn't actually recoverable.)

This attitude towards exceptions is of course not an acceptable one for Python code, and that's one of the reasons I don't think it's appropriate to wrap these (directly) to Python, but it seems to be the direction C++ is moving (albeit with a new way to express asserts coming in C++20).

* functions of a PackedBasis2d with order @f$N@f$ are @f$B_m(x)B_n(y)@f$ for all combinations
* with @f$m + n \le N@f$.
*
* The ordering of the products of 1-d basis functions in the 2-d basis is defined by
Contributor:

I suggest changing "the 2-d basis" to "this 2-d basis" to help emphasize that the order is fixed for PackedBasis2d but not all Basis2d implementations.

* The scaled basis will transform all points by the given scaling
* before evaluating the basis functions in the same way as `this`.
*/
Scaled scale(Scaling1d const & scaling) const;
Contributor:

I strongly suggest you call this method scaled instead of scale, to make it clearer that it returns a new scaled instance instead of modifying this. Also, that name is usefully more symmetrical with the unscaled function, which makes it easier to predict one name if you know the other.

/// Return the coefficient with the given (possibly flattened) index.
double getCoefficient(std::size_t i) const { return _coefficients[i]; }

/// Return the coefficient with the given (possibly flattened) index.
Contributor:

"Return" -> "Set"

return _basis.sumWith(point, _coefficients, workspace);
}

/// Return the coefficient with the given (possibly flattened) index.
Contributor:

Please state whether the index is checked. I realize you say no in an introduction somewhere, but it'd be nice to have a reminder on methods that take an index if it's not too much trouble.

Contributor:

Please consider adding a size method or similar, so the user can easily determine the number of coefficients.

Contributor:

Please consider adding getCoefficients and setCoefficients methods that return/take Eigen::VectorXd

Member Author:

Added size and a note that the caller is responsible for ensuring indices are valid.

I've replaced getCoefficient() and setCoefficient() with operator[], iterator accessors, and a const-overloaded getCoefficients() method that returns an Eigen view of unspecified type.

* double a = values[index.flat];
* // standard polynomial basis functions are just powers
* double b = std::pow(point.getX(), index.x)*std::pow(point.getY(), index.y)];
* assert(std::pow(a - b) < std::numeric_limits<double>::epsilon());
Contributor:

std::pow -> std::abs. Please look for this elsewhere, also.

return IndexRange(IndexRange::iterator(), IndexRange::iterator::makeEnd(getOrder()));
}

/// Allocate workspace that can be passed to sumWith() and fill() to avoid repeated memory allocations.
Contributor:

Consider "Allocate a workspace" here and elsewhere. I think it scans a bit better.

* Return a range of iterators that dereference to Index2d.
*
* This provides the most natural way to interpret the packed coefficients and basis functions
* utilized by PackedBasis2d; for example,
Contributor (@r-owen, Jul 11, 2018):

What do you mean by "the most natural way"? Does this add useful information beyond "a natural way"?

Contributor (@r-owen, Jul 11, 2018):

I think it would be helpful to briefly say something in this documentation block about the order of the coefficients and bases, e.g. that the order is controlled by the index iterator class, such as PackedIndexIterator, since that seems to be the standard available class.

Member Author:

I think what I was trying to say is that this (getting an IndexRange object via getIndices() and iterating over it) is the recommended way to interpret the packed coefficients. I suppose I should just go ahead and say exactly that.

I'll add a doc link to PackedIndexIterator.

#include "Eigen/Core"
#include "lsst/geom/polynomials/ScaledBasis2d.h"

namespace lsst { namespace geom { namespace polynomials {
Contributor:

Please run clang-format on the code if you are at all willing. It makes life a lot easier for those of us that like to run it when updating code and the format is good. I'm sure you know this, but it can be run from the command line on all the code in one pass (requiring a command-line argument to update the files in place).

Member Author:

I'm afraid I would rather not. I don't dislike clang-format's outputs enough to want to stop others from using it, but I do occasionally dislike them enough to not want to use it on code I've already formatted to be standards-compliant and readable.

@r-owen (Contributor) left a comment:

A few more comments

*/
template <typename Vector>
double sumWith(geom::Point2D const & point, Vector const & coefficients, Workspace & workspace) const {
assert(workspace.getOrder() >= getOrder());
Contributor:

Even here I suggest an exception, as user error can cause the problem.

Member Author:

(best to just discuss this on the other thread on this topic; I'll make sure this instance is consistent with that one)

*/
struct Index2d {

/// Construct an index with zero entrires.
Contributor:

"entrires" -> "entries"

constexpr Index2d() noexcept : flat(0), x(0), y(0) {}

/// Construct with the provided values.
constexpr Index2d(std::size_t flat_, std::size_t x_, std::size_t y_) noexcept :
Contributor:

I'm still uneasy about using x/y to indicate the index or order of x/y. Please consider ix/iy, i/j, xorder/yorder, xindex/yindex...

Member Author:

Renamed to nx/ny.

}

/// Return the flattened size of an expansion with the given maximum order (inclusive).
static constexpr std::size_t computeSize(std::size_t order) noexcept {
Contributor:

Consider making computeSize a static member function of PackedBasis2d or a free function, since I would expect it to be common to all packed 2-d indices.

Contributor:

It appears this didn't happen. It's a bit less needed now that you have the order trait, but it's still worth considering. If you do it then you probably want to write computeOffset in terms of computeOrder instead of the other way around.

Member Author:

Sorry for bringing this back up two months later, but I don't understand - PackedBasis2d::computeSize does now exist (it just delegates to the version on PackedIndexRange).

* A pair of indices @f$(x, y)@f$ is mapped to the flattened position
* @f$i = (x+y)(x+y+1)/2 + x@f$, which yields the (x, y) ordering
* ```
* (0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), ...
Contributor:

Thank you. How would you envision supporting the other obvious order, incrementing the y axis first? It looks to me as if it would require a copy of PackedIndexIterator; it could not be a constructor flag on this class, because some of the relevant methods are static. I'm wondering how you feel about that. It makes me a bit uneasy, but it's probably safer than letting users shoot themselves in the foot by getting a flag wrong.

Member Author:

Could be done by making PackedIndexIterator and PackedIndexRange templates with an enum class template parameter, and then making PackedBasis2d templated on that as well. That's easy enough, and I'll go ahead and do it, but it raises the question of what to do about the various typedefs to PackedBasis2d:

  • should I make them template typedefs, so one has to write e.g. ScaledChebyshev1Basis2d<X_FIRST>?
  • or should I define the typedefs to use a default order, forcing those who don't want to use the default to use the original template classes instead of the typedefs?

I lean towards the latter.

Contributor:

I lean towards requiring a template parameter. This is low level code and I feel that the order isn't obvious enough to use a default (we've run into trouble in the past with defaults that surprise users, i.e. "when in doubt refuse to guess" and "explicit is better than implicit").

I also suggest making typedefs for the two options, e.g. ScaledChebyshev1Basis2dXFirst. That doesn't save much typing but saves the user from wondering what template parameters are allowed.

I can't think of any other reasonable orders. If you can then perhaps the class responsible for the order should be the template parameter.

Member Author:

The other reasonable orders I can think of are those that look like reading the rows or columns of a triangular matrix, e.g.:

(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)

or

(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)

Those have the obvious disadvantage that you can't use a slice of a higher-order coefficient array to make a coefficient array for a lower-order expansion, and as a result I hope we don't ever need to support them. But I think I've seen them used in external code (certainly for representations of triangular matrices).
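The x-first packed ordering being discussed can be sketched as follows (illustrative helper names; computePackedIndex mirrors the flattening formula quoted from the documentation):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the x-first packed ordering: the flattened index of the basis
// function with orders (nx, ny) is i = s(s+1)/2 + nx with s = nx + ny, so all
// terms of total order s precede every term of total order s + 1.
constexpr std::size_t computePackedIndex(std::size_t nx, std::size_t ny) {
    std::size_t s = nx + ny;
    return s * (s + 1) / 2 + nx;
}

// Enumerate (nx, ny) pairs in flattened order up to a maximum total order;
// note that truncating this sequence at any total order yields a valid
// lower-order expansion, which the row/column orderings above do not allow.
std::vector<std::pair<std::size_t, std::size_t>> packedIndices(std::size_t order) {
    std::vector<std::pair<std::size_t, std::size_t>> out;
    for (std::size_t s = 0; s <= order; ++s) {
        for (std::size_t nx = 0; nx <= s; ++nx) {
            out.emplace_back(nx, s - nx);
        }
    }
    return out;
}
```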

* a contiguous subset of the coefficients for an (n+1)th-order expansion.
*
* PackedIndexIterator is an STL input iterator, *not* a forward iterator;
* incrementing the iterator invalidates all previously-dereferenced values.
Contributor:

Please explain somewhere in this doc why this class does not support pre-increment

Member Author:

It does support both pre-increment and post-increment. What it doesn't support is:

PackedIndexIterator iter;
auto const & v = *iter;
++iter;
[do something with v]

That's because for this iterator, v points to something inside the iterator, and it's already been updated to point to a new element after ++iter. If you'd done this with a real ForwardIterator, such as std::vector's:

auto iter = vector.begin();
auto const & v = *iter;
++iter;

that v would still be a reference to the first element.

Contributor:

Thank you for the explanation.
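The invalidation behavior the author describes can be demonstrated with a minimal sketch (a hypothetical counting iterator, not PackedIndexIterator itself): operator* returns a reference to storage inside the iterator, so incrementing changes what a previously-bound reference sees.

```cpp
#include <cassert>
#include <cstddef>

// Minimal input-iterator-style sketch: the dereferenced value lives *inside*
// the iterator, and operator++ overwrites it in place, invalidating any
// previously obtained reference.
class CountingIterator {
public:
    explicit CountingIterator(std::size_t start) : _value(start) {}
    std::size_t const& operator*() const { return _value; }
    CountingIterator& operator++() {  // pre-increment is supported...
        ++_value;                     // ...but it updates the referenced storage
        return *this;
    }
private:
    std::size_t _value;
};
```

With a true ForwardIterator such as std::vector's, a reference bound before the increment would still refer to the original element; here it tracks the iterator's current state instead.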

/// Return the flattened index for the element with the given x and y orders.
static constexpr std::size_t computeIndex(std::size_t x, std::size_t y) noexcept {
return iterator::computeIndex(x, y);
}
Contributor:

Why are the three methods above duplicated? I would expect you could have them in just one class or the other.

Member Author:

Just convenience; PackedIndexIterator is the one that does all of the real work and the one that's depended on, so it needs to have them because it provides (and uses) the implementations, while PackedIndexRange is expected to be the type external users interact with (with the iterator, as usual, usually being treated as an opaque type that just implements an iterator interface).

@r-owen (Contributor) left a comment:

A few more suggestions. Getting close on this pull request.

* @note This class is only present in the documentation, as it represents an
* abstract interface for which C++ (prior to C++20, at least) has no
* language support. It will be formalized into a true Concept when
* that language feature is available.
Contributor:

I suggest not promising that we will do this. Concepts may arrive too late for us to bother, and even if it is available soon we may have higher priorities. Consider "It could be" or "it should be" instead of "It will be" (here and elsewhere).

* Construct a ScaledPolynomialBasis1d that remaps the given interval to [-1, 1] before
* evaluating polynomials.
*
* @param[in] order Maximum order of the basis (inclusive).
Contributor:

Thank you very much for "(inclusive)".

* This operation is not numerically stable at high order, but when fitting
* standard polynomials, it is still much more stable to first fit in a scaled
* basis and then (if necessary) use `unscaled` to compute the coefficients
* of an equivalent unscaled polynomial.
Contributor:

Nice note!

* Evaluate the basis at a given point.
*
* @param[in] x Point at which to evaluate the basis functions.
* @param[in] basis Output vector. May be any type for which
Contributor:

Another [in] that needs fixing

* Evaluate the basis at a given point.
*
* @param[in] x Point at which to evaluate the basis functions.
* @param[in] basis Output vector. May be any type for which
Contributor:

Another [in] that needs fixing

/**
* Return a range of iterators that dereference to Index2d.
*
* This provides the most natural way to interpret the packed coefficients and basis functions
Contributor:

Another "the most natural way" that should perhaps be "a natural way"

Member Author:

Changed to "the recommended way".

* double a = values[index.flat];
* // standard polynomial basis functions are just powers
* double b = std::pow(point.getX(), index.x)*std::pow(point.getY(), index.y)];
* assert(std::pow(a - b) < std::numeric_limits<double>::epsilon());
Contributor

Another std::pow that should be std::abs

* require a larger-scale rethink of geom, however, and at present
* we don't actually have a use case for that interoperability, so it's
* something we should keep in mind for the future, not a high priority
* for the present.
Contributor

Interesting point

/**
* Return a Scaling1d that maps the interval [min, max] to [-1, 1].
*/
inline Scaling1d makeUnitRangeScaling1d(double min, double max) noexcept {
@r-owen (Contributor), Jul 12, 2018

I suggest makeScaling1d. I don't think "UnitRange" adds useful information and I find it confusing. Similarly for the 2d version below. The fact that Scaling1d is affine seems to already tell the user what they need to know. If you do ever have non-linear scalings you can use fancier names for those and the associated make functions.

Contributor

OK. I think I see why you did it and I withdraw the comment. It wasn't initially clear to me that the term "unit range" meant [-1 to 1].

* `t.applyInverse(x)` and `r.applyInverse(y)` is equivalent to
* `t.applyForward(y)`.
*/
Scaling1d invert() const noexcept {
Contributor

I strongly suggest renaming this getInverse, both to match Transform and so that the name does not appear to modify this. Similarly for the 2d case. If you hate that name then consider inverted.

Member Author

So, both getInverse and inverted are indeed much better than invert, but invert is what LinearTransform and AffineTransform have. My preference would be inverted, but I don't want to blithely go in the opposite direction of Transform.

I'm happy to deprecate LinearTransform::invert and AffineTransform::invert (but not change everything downstream now) in favor of one of the others - how would you feel about also switching Transform to inverted?

@r-owen (Contributor), Jul 17, 2018

I agree that inverted is somewhat nicer. I am willing to make the change (though I'm not entirely sure it's worth the time). I would also want to change it in astshim (not a big deal). It will require a simple change in about a dozen packages, but that should only take half a day or so. I propose to also change "simplify" to "simplified" at the same time.

Contributor

I filed RFC-500 https://jira.lsstcorp.org/browse/RFC-500. I hope we can settle that in time to decide what to do here.

@r-owen (Contributor) left a comment

Overall this looks very nice -- clean code that is well documented and well tested. I do have some requested changes, but nothing big. I'm afraid it's a lot for one reviewer and I may well have missed some things. I would like one more look before merging. I have also asked @kfindeisen to take a brief look at the big picture, if he can find the time.

*
* @exceptsafe Most operations on Scaling2d are `noexcept`, but those
* that interact with geom objects can't be marked as such
* because those objects haven't been updated for C++11/14.
Contributor

This comment is outdated. @kfindeisen updated geom to add noexcept where he could. Please look at what he did and figure out which additional methods can be marked noexcept.

I didn't see anything like this anywhere else, but I'd appreciate your thinking about that.

// when constructing a BinomialMatrix with nMax higher than we have seen
// before, and wrap that expansion in a mutex for thread safety.
// Reads from the matrix do not require going through the mutex.
static Eigen::MatrixXd & getMatrix();
Contributor

I was confused about the code, but it makes sense now. Yes, I would personally just use one matrix per instance, as the current code feels like premature optimization. But as you say, the complexity isn't too bad and it will behave better in tight loops, so yes it seems reasonable.

*
* @param[in] order Maximum order of the basis (inclusive).
* @param[in] box Box to be mapped to [-1, 1]x[-1, 1]
*/
Contributor

Fair enough. However, I would still appreciate a reference to the appropriate Chebyshev polynomial class.


#include <vector>
#include <typeinfo>
#include "Eigen/Core"
Contributor

Please include <limits> since you are using std::numeric_limits: https://en.cppreference.com/w/cpp/types/numeric_limits

// Test that we can call sumWith on:
// 1) STL random-access containers
std::vector<double> const coefficients1(coefficients);
double z1 = basis.sumWith(point, coefficients1);
Contributor

Why do you copy coefficients? It's const and so is the copy coefficients1, so why not just pass coefficients directly into sumWith? Similarly for the coefficients2. If it's just naming (each case should have its own name) then you could name based on functionality, e.g. coefficientsEigen and coefficientsEigenExpr.

Member Author

Looks like it was just history; at one point I think I was passing coefficients as some other type. Will fix.

CUSTOM_CHECK_CLOSE(toUnitRange.applyForward(max), 1.0, DEFAULT_RTOL);
CUSTOM_CHECK_CLOSE(toUnitRange.applyInverse(-1.0), min, DEFAULT_RTOL);
CUSTOM_CHECK_CLOSE(toUnitRange.applyInverse(1.0), max, DEFAULT_RTOL);
}
Contributor

Please add a test for `Scaling2d` and `makeUnitRangeScaling2d`.

BOOST_AUTO_TEST_CASE(scalings1d) {
Scaling1d affine(2, -0.5);
auto inverse = affine.invert();
auto identity = affine.then(inverse);
Contributor

Please check that affine.getScale() matches your requested scale, and similarly for affine.getShift(). In order to do this I suggest using named constants (such as shift and scale) for your values.


// scaled twice
testScaledBasis(makeScaledPolynomialBasis1d(order, min, max), point, coefficients, scaling);
testScaledBasis(makeScaledChebyshev1Basis1d(order, min, max), point, coefficients, scaling);
Contributor

A few more code comments would be appreciated here. I don't know what this is supposed to do. In particular, what value should point have by the time the Polynomial and Chebyshev bases see it? Similarly for the basis2d test below.

Would you expect multiple scalings to increase roundoff errors?

Member Author

This is really just to test ScaledBasis1d.scaled(...) and the thing it returns (which should be another ScaledBasis1d, with the scalings composed). The actual values of the points and coefficients don't matter. I've added a comment to that effect.


// scaled once
testScaledBasis(PolynomialBasis1d(order), point, coefficients, scaling);
testScaledBasis(Chebyshev1Basis1d(order), point, coefficients, scaling);
Contributor

Please test that the scaled value of point is what you expect it to be. This will show that it is in the range [-1, 1], which would be appreciated for this test. Similarly for the basis2d test below.

Member Author

I'm afraid this point won't actually be in the range [-1, 1], because we actually apply the second scaling (which does not map to unit range) after applying the one that does. That's not actually something one would do in non-test code; it's just a convenient way to construct and compose two different scalings, which is all I wanted to do here.

);
auto func = unscaled(sfunc);
for (double x = min; x < max; x += 0.3) {
CUSTOM_CHECK_CLOSE(sfunc(x), func(x), 2*DEFAULT_RTOL);
Contributor

I'm confused. I thought scaling applied to the input values, in which case I would expect the necessary x to be different for sfunc and func. Similarly for the 2d case, and that factor of 100 on RTOL is rather alarming.

@TallJimbo (Member Author), Jul 17, 2018

The factor of 100 on rtol is related to the lack of numerical stability noted at https://github.com/lsst/geom/pull/8/files#diff-af047e9f6ac30cc50e40cd0dbffe6dd7R40.

I think your confusion about what's going on here indicates that unscaled is not a good name; in any case, it is not the opposite of the scaled method on the Basis objects:

  • The scaled method takes an input Basis and returns a new basis that transforms all inputs, which is in no way the same as the original Basis.
  • The unscaled function takes an input Function that uses a ScaledBasis, and returns an equivalent Function with different coefficients that uses a non-Scaled Basis.

I'll think about alternate names for unscaled, but please let me know if you come up with one.

@kfindeisen (Member) left a comment

Sorry, but this goes way beyond my knowledge of generic programming -- all I can really say about the design is that the user seems to need to know about an awful lot of classes. Maybe somebody who's more familiar with template oriented programming or the C++-specific techniques you use would be able to offer better feedback?

/**
* @namespace lsst::geom::polynomials Low-level polynomials (including special polynomials) in C++.
*
* The math::polynomials library provides low-level classes for
Member

Typo.

Contributor

You mean math:: instead of geom::?

* allows a conceptual 2-d triangular matrix to be implemented on top of
* a 1-d array.
*/
namespace polynomials {
Member

Why isn't this a DMTN-30-compliant sphinx page?

Member Author

It's documenting pure C++ code, so if we did this with Sphinx we wouldn't get any of Doxygen's nice links to code objects. I'm hoping once we have a Doxygen-Sphinx bridge a Doxygen namespace page would just be automatically translated into a "Module" page, while the doc source could stay in more C++-friendly Doxygen. I think the content here is basically just DMTN-030's "In depth" subsection, but all of the other relevant subsections for "Module" seem like they'd ideally be machine-generated.

@jonathansick is that at least not inconsistent with your vision?

Member

AFAIK Doxygen namespace docs will not be automatically translated to anything. Given how hard it has proven to get them right in a heterogeneous project (lsst, detection vs. detection), I think this is a very good thing.

Member Author

Do you think a Doxygen @page would work better than namespace documentation, then?

Member

No, that suffers from the same problem, especially since the Doxygen "related pages" concept doesn't scale well to multi-package projects. (For example, half the existing pages are specific to astshim, and most of the rest are from afw, but they're all mixed together.)

* - Basis1d and Basis2d: objects that evaluate basis functions at
* individual points. The only concrete Basis1d implementation provided
* at present is RecurrenceBasis1d template class, which can be used (with
* a the recurrence relation as a template parameter) to implement most
Member

Typo

* at present is RecurrenceBasis1d template class, which can be used (with
* a the recurrence relation as a template parameter) to implement most
* special polynomials. Recurrence relations for standard polynomials and
* Chebyshev polynomials of the first kind are provided here. The
Member

Provided where? How do I find them?

Member Author

Added parenthetical links to the classes.

* PackedBasis2d template provides an implementation of Basis2d that
* evaluates a Basis1d over each dimension.
* @ref PolynomialBasis1d, @ref Chebyshev1Basis1d, @ref PolynomialBasis2d,
* and @ref Chebyshev1Basis2d provide typedefs to instantiations of these
Member

These @refs should not be necessary. Do the autolinks not show up in the build?

Member Author

Yeah, the autolinks seem to work for the classes and template classes, but I've found I need to use @ref to refer to a using declaration (I'm not sure if that would be true for old-style typedefs or not).

* functions. For example, a ScaledChebyshev1Basis1d combines both a
* Chebyshev polynomial basis and the scaling from some domain to [-1, 1]
* that enables most special properties of Chebyshevs. When necessary,
* the @ref unscaled functions can be used to convert a scaled polynomial
Member

Is there a class/namespace called unscaled?

Member Author

It's a free function. Added parens to clarify.

* - provide workspace objects to minimize memory allocations when
* appropriate;
* - do not throw exceptions (users are responsible for providing valid
* inputs);
Member

Given the presence of both scaled and unscaled versions of a basis (and the expectation that code will switch between them frequently), this seems like an API that's easy to use incorrectly.

Member Author

Yes, that's a potential concern. Unfortunately, since both scaled and unscaled versions have unbounded domains, there's no way for this code to verify that calling code is correct in this sense.

Member

I don't understand. Your first example of a scaling is mapping an arbitrary interval to [-1, 1] so that it can be a proper Chebyshev polynomial. That implies that the domains are bounded.

@TallJimbo (Member Author), Jul 17, 2018

Good point; we could have Scaling classes that remember their bounds, and use those to check their inputs. I'm not sure if I'd want those to replace or augment the more general ones we have now, and I still wouldn't want those to have any error checking that can't be compiled away (I expect polynomial evaluations to frequently happen inside inner loops where it's much more efficient to put the error checking on the loop bounds). I'll play around with the idea and see if I can come up with anything I like.

@TallJimbo force-pushed the tickets/DM-13065 branch 2 times, most recently from fbda582 to 7d05d49, July 17, 2018 20:47
@TallJimbo (Member Author)

In case it's useful to others (but mostly a reminder to myself), my to-do list for this ticket is now:

  • Try to support multiple packing orders by templating PackedIndexRange and PackedIndexIterator on an enum.
  • Try to support (debug-only) bounds checking for (at least some) Scalings.
  • Switch from invert to inverted in anticipation of RFC-500.
  • Come up with a new name for unscaled.

@TallJimbo (Member Author)

Support for multiple packing orders has been added, and invert has been renamed inverted.

Inspired by @kfindeisen's comment on bounds checking, I am modifying the Scaling classes to always be constructed from a pair of bounds (so they map one interval/box to another interval/box), and to remember the input bounds so they can do range-checking. Accordingly, I'm also renaming them IntervalTransform (1d) and BoxTransform (2d), moving them into geom:: proper, and adding some interoperability between BoxTransform and the existing AffineTransform. I'm going to defer trying to tie the 1-d IntervalTransform class more tightly into geom:: for now, but in the longer term I think a pair of 1-d Box-like Interval objects could be a really nice addition (and possibly something the Box classes should be implemented on top of).

Please shout if any of that causes you concern.

@r-owen (Contributor) commented Jul 18, 2018

I can no longer find the conversation, but regarding a unit test I asked about a factor of 100 increase in rtol and @TallJimbo replied: "The factor of 100 on rtol is related to the lack of numerical stability at https://github.com/lsst/geom/pull/8/files#diff-af047e9f6ac30cc50e40cd0dbffe6dd7R40."

Please add this as a code comment in that unit test.

@TallJimbo (Member Author)

I'm bailing out on the refactor to add bounds-checking to scaled transforms. It had spiralled into a host of new classes that just made the whole system more confusing, in addition to adding what's effectively bloat to the Scaling classes in the common case where the calling code does not have a logic bug. I could have avoided that bloat with more template parameters on all the Scaled objects, but of course that comes with its own costs in terms of complexity and readability, and it just didn't seem worth it. I'm also worried that any kind of bounds-checking would be subject to spurious failures due to round-off error; I don't see a good way to ensure any extra tolerances are sufficient without knowing what kinds of operations the caller might have called on the inputs to make them go slightly beyond the exact bounds.

So, I think the best approach is to more or less leave this as-is: it will be possible to shoot yourself in the foot by treating a Scaled basis as a non-scaled one (or vice versa), but I think that's an acceptable tradeoff for that same substitutability making it possible to write generic code that e.g. fits or evaluates either. In an alternate universe, I'd be curious to try to solve this problem with Strong Types, but that whole approach is so incompatible with the Python type system that I don't think it makes sense for us (even though these objects aren't wrapped for Python, the Point and Box objects they interact with are).

@r-owen (Contributor) left a comment

Thanks for the improvements. I found a few minor things and need a bit more time to finish reviewing this package.


* a Chebyshev polynomial basis and the scaling from some domain to
* [-1, 1] that enables most special properties of Chebyshevs. When
* necessary, the unscaled() functions can be used to convert a
* scaled polynomial function to an equivalent unscaled function.
Contributor

I believe you renamed the unscaled function to simplified. If so, please do a global search.

Contributor

This last sentence is potentially confusing (what is an equivalent unscaled function?). A few more words would help, such as: "...to convert a scaled polynomial to an equivalent unscaled function by computing new polynomial coefficients. At this time there is no implementation of simplified for Chebyshev polynomials."

/**
* Return the binomial coefficient.
*
* No error checking is performed; the behavior of this method is is
Contributor

"is is" -> "is"

// have ever calculated in a static matrix. We expand that matrix
// when constructing a BinomialMatrix with nMax higher than we have seen
// before, and wrap that expansion in a mutex for thread safety.
// Reads from the matrix do not require going through the mutex.
Contributor

Why is it safe to read without a mutex? I worry that a read may be unsafe if the matrix is being swapped (by code in another thread constructing a BinomialMatrix with nMax larger than previously seen).

@TallJimbo (Member Author), Aug 22, 2018

Hmm, I think you may be right (or at least I can't be certain that you're wrong). I'll go ahead and remove the static object and just construct a new internal matrix from scratch with every BinomialMatrix instance.

Contributor

Drat. I was hoping there was some reason this was safe. Would it make any sense to use a mutex for the read? If you risk deadlocks then I agree...dump the static.

Member Author

I believe I could make it safe by using the same mutex for the read, but if that's necessary both the code complexity and my (wild) guess at the actual performance benefits of having a cache vs. acquiring the lock flips towards things being better without the global cache.

*
* Caller is responsible for ensuring that the given index is valid.
*/
double const & operator[](std::size_t n) const { return begin()[n]; }
Contributor

I suggest documenting these two together with //@{...//@} as you do for getCoefficients next

}
}

static std::size_t getEndX(std::size_t order) { return 0; }
Contributor

Please document what this does. I'm guessing it returns nx for the end entry (one past the final valid flattened 2d index) but when I first saw that it returns 0 I was very surprised. Similarly for getEndY.

}

/// Return the flattened size of an expansion with the given maximum order (inclusive).
static constexpr std::size_t computeSize(std::size_t order) noexcept {
Contributor

It appears this didn't happen. It's a bit less needed now that you have the order trait, but it's still worth considering. If you do it then you probably want to write computeOffset in terms of computeOrder instead of the other way around.


/**
* A specialized iterator range class for PackedIteratorRange, providing
* size calculation, comparison, and range-based `for` support.
Contributor

PackedIteratorRange doesn't seem to exist (fortunately, as it sounds too much like PackedIndexRange). Do you mean PackedIndexIterator?

@TallJimbo (Member Author), Aug 22, 2018

Should have been PackedIndexIterator.

*
* The returned polynomial will of course have different coefficients than
* the input one, as these need to account for the scaling without it being
* explicitly applied.
Contributor

Please remove "of course". Consider something like the following:

"Calculate a standard polynomial function with no scaling that is equivalent to a scaled standard polynomial function

The point is to compute new coefficients that take the scaling into account. One use case is writing SIP coefficients to headers"

* which most of the special functions of the basis are only active when
* the domain is limited to [-1, 1].
*
* This signature requires that the Nested(order) be a valid constructor.
Contributor

I suggest removing "the": "This signature requires that Nested(order) be..." both here and for the 2d version.

@r-owen (Contributor) left a comment

Looks good! A few new things to consider but nothing big.

Point2D point(x, y);
// First check: safe summation in simplified() itself (always
// present), but not when evaluating the polynomials.
CUSTOM_CHECK_CLOSE(sfunc(point), func(point), 100*DEFAULT_RTOL);
Contributor

Please explain the factor of 100 in a code comment. Why so large? What is the dominant source of error?

binomial(i.ny, k)*vPow[k]*tmp;
}
}
}
Contributor

Consider getting f.getBasis() once and then using that in the rest of the code, to improve readability.

@TallJimbo force-pushed the tickets/DM-13065 branch 4 times, most recently from f62b196 to 3242ed3, August 22, 2018 20:43
@TallJimbo (Member Author)

@r-owen, I believe I've addressed all of the latest comments (in geom) and as I think they were all quite straightforward, I don't think you need to take another look. You're of course welcome to, and I won't merge until tomorrow morning.

@r-owen (Contributor) commented Aug 22, 2018

I do not feel a need to take another look. Thanks for the updates and explanations!

@TallJimbo TallJimbo merged commit 7ff7895 into master Aug 23, 2018
@TallJimbo TallJimbo deleted the tickets/DM-13065 branch August 23, 2018 17:37