
Conversation

@ukoethe
Contributor

@ukoethe ukoethe commented Sep 9, 2017

This PR adds two files and corresponding tests:

  • xconcepts.hpp: concept checking macro XTENSOR_REQUIRE, some traits classes
  • xmathutil.hpp: namespace xt::cmath, additional algebraic functions

It also extends xexception.hpp with two new assertion macros and moves numeric_constants from xmath.hpp to xmathutil.hpp.

The most controversial aspect of the PR is probably the norm() function. It actually returns the norm, whereas std::norm() computes the squared norm. IMHO, this decision of the C++ standard makes no sense at all. Nonetheless, xtensor reproduces this behavior in its xexpressions, and one can argue that consistency with the C++ standard is more important than meeting the user's intuitions about a function's effect. What's your opinion? If you want me to rename my norm(), what's a sensible name?

@SylvainCorlay
Member

SylvainCorlay commented Sep 9, 2017

The most controversial aspect of the PR is probably the norm() function. It actually returns the norm, whereas std::norm() computes the squared norm. IMHO, this decision of the C++ standard makes no sense at all. Nonetheless, xtensor reproduces this behavior in its xexpressions, and one can argue that consistency with the C++ standard is more important than meeting the user's intuitions about a function's effect. What's your opinion? If you want me to rename my norm(), what's a sensible name?

On this subject: so far we have advertised xt::math as providing functions from cmath; however, another objective was for the API to be similar to that of numpy, which already conflicted for the case of np.max / np.maximum and std::max...

@JohanMabille do you have an opinion on this?

@SylvainCorlay
Member

Quick nitpicking on the formatting.

  1. Convention on indentations and newlines with namespaces

    namespace foo
    {
        class bar;
    }

    (the logic is to not have special cases between classes / namespaces / functions, so that there is no cognitive overhead in detecting a formatting error when reading the code)

  2. We format the content of multi-line macros like the rest of the code, and we add whitespace to align the trailing backslashes \. See e.g. https://github.com/QuantStack/xproperty/blob/master/include/xproperty/xproperty.hpp#L93 for an extreme example of that.

  3. We don't add comments on closures of namespaces or inclusion guards.

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

Convention on indentations and newlines with namespaces

Fixed.

we format the content multi-line macros like the rest of the code. and we add many whitespaces to align the trailing backslashes

Fixed. (However, you added too much whitespace for my editor's settings, so the macro formatting was actually broken on my screen. :-)

We don't add comments on closures of namespaces or inclusion guards.

I'd like to insist on these. Since namespace closings and #endif are often located very far from the corresponding beginning, I find these comments very helpful.

General comment: My experience with VIGRA suggests that overly strict formatting requirements are not worth the trouble. For example: there will never be agreement about where the opening brace { of a scope should go. We are just lucky that I prefer the same convention as you do. In the end, readability is not compromised either way, even if conventions are mixed.

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

another objective was for the API to be similar to that of numpy, which already conflicted for the case of np.max / np.maximum and std::max...

This raises a general question: if a function can be applied both element-wise and globally (such as norm() or max()), there should be a clear convention for distinguishing the two possibilities. There must be a better (more self-explanatory) way than calling one max() and the other maximum(). I experimented with norm() vs. elementwise_norm(), but this is too verbose to be satisfactory.

@SylvainCorlay
Member

SylvainCorlay commented Sep 9, 2017

Fixed. (However, you added too much whitespace for my editor's settings, so the macro formatting was actually broken on my screen. :-)

Yeah, the xproperty case is a bit extreme. As long as the trailing backslashes are far enough away that we can read the code as regular C++ without visual clutter, it is good!

I'd like to insist on these. Since namespace closings and #endif are often located very far from the corresponding beginning, I find these comments very helpful.

No problem with the closing comments.

General comment: My experience with VIGRA suggests that overly strict formatting requirements are not worth the trouble. For example: there will never be agreement about where the opening brace { of a scope should go. We are just lucky that I prefer the same convention as you do. In the end, readability is not compromised either way, even if conventions are mixed.

Actually, I use the foo() { convention in the JavaScript codebases that I maintain, so that now, when I see this, my brain wants to read the code as JavaScript :)

In general I prefer having a uniform convention across a project, and this is the one that we picked here...

@SylvainCorlay
Member

another objective was for the API to be similar to that of numpy, which already conflicted for the case of np.max / np.maximum and std::max...

This raises a general question: if a function can be applied both element-wise and globally (such as norm() or max()), there should be a clear convention for distinguishing the two possibilities. There must be a better (more self-explanatory) way than calling one max() and the other maximum(). I experimented with norm() vs. elementwise_norm(), but this is too verbose to be satisfactory.

note on this: in xtensor and numpy, things like sum are reducers, in that you can specify a list of axes over which they are applied, and they return an un-evaluated expression.

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

I experimented with norm() vs. elementwise_norm(), but this is too verbose to be satisfactory.

To clarify: elementwise_norm() makes sense when the value_type is itself a structured type, such as rgb_pixel or tiny_matrix. In these cases, the reduction doesn't go over an axis (or set of axes), but over individual elements.

@SylvainCorlay
Member

I experimented with norm() vs. elementwise_norm(), but this is too verbose to be satisfactory.
To clarify: elementwise_norm() makes sense when the value_type is itself a structured type, such as rgb_pixel or tiny_matrix.

Gotcha

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

another objective was for the API to be similar to that of numpy

According to this goal, my convention is preferable: numpy.linalg.norm() returns the norm, not the squared norm.

@JohanMabille
Member

JohanMabille commented Sep 9, 2017

Having the same conventions everywhere in the code is also helpful when you start using linters and some tools performing static code analysis.

I agree with @ukoethe on the closing comments.

About the norm: I think that before C++11, the norm function was available for complex only. Since std::abs already returned the magnitude of a complex, a norm function should have a different behavior. They decided to implement the squared norm for performance reasons. Then with C++11 they added overloads for other numerical types. Now I agree the name was poorly chosen.

If I'm not mistaken, there is no norm function in cmath, and we somehow chose arbitrarily to follow the same convention as the one in complex, but this is not irreversible, especially if the name is counter-intuitive. (Actually, taking an extreme point of view, there should not be any function called simply "norm"; we should specify in the name of each norm function which norm it implements.)

That said, if we decide to change the implementation of norm, I think it might be useful to add a squared_norm, for the same considerations as the ones that led the standard committee to add the norm function.

@SylvainCorlay
Member

About the norm: I think that before C++11, the norm function was available for complex only. Since std::abs already returned the magnitude of a complex, a norm function should have a different behavior. They decided to implement the squared norm for performance reasons. Then with C++11 they added overloads for other numerical types. Now I agree the name was poorly chosen.

Excellent point

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

Having the same conventions everywhere in the code is also helpful when you start using linters and some tools performing static code analysis.

Agreed, it's definitely preferable. But given the complexity of advanced C++, most VIGRA contributions had far more pressing issues than formatting, so I eventually relaxed...

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

I think it might be useful to add a squared_norm for the same considerations as the ones that led the standard committee to add the norm function.

That is what I propose. We can also consider sq_norm() or norm_sq() for brevity.

Renaming norm() to avoid confusion is also a good idea, but I cannot think of a convincing alternative name. Appending the type of norm (e.g. norm_L2()) is a possibility, but at first glance I prefer numpy's solution of passing the type of norm as an argument.

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

Speaking of linters: I think Codacy's complaint is a false positive. The value parameter is used in line 220.

@JohanMabille
Member

Indeed. Can you try with constexpr instead of const? (Not sure that would fix the problem, but it is worth a try.)

@ukoethe
Contributor Author

ukoethe commented Sep 9, 2017

The more I think about the naming of the norm function, the more I like the convention norm_l1(), norm_l2(), norm_sq(), etc. This would also allow each kind of norm to return a different type (e.g. size_t would be the most suitable type for the zero norm). In any case, one doesn't want a big switch statement in the inner loop choosing the norm according to an argument of the norm function. The crucial question is whether there are any important use cases where the appropriate norm can only be determined at runtime.

@ukoethe
Contributor Author

ukoethe commented Sep 10, 2017

Can you try with constexpr instead of const ?

Didn't work either. Moreover, it doesn't feel right to implement workarounds for a broken sanity checker; one should probably file a bug report with the Codacy team.

@SylvainCorlay
Member

I agree. We don't need to pass every Codacy check to merge PRs, but it has sometimes proven useful for detecting legitimate issues.

@SylvainCorlay
Member

Although what can be constexpr should probably be constexpr (even if it does not fix the warning).

@ukoethe
Contributor Author

ukoethe commented Sep 10, 2017

I revised the PR and replaced norm() with a set of specific norm functions: norm_l0(), norm_l1(), norm_l2(), norm_sq(), norm_max(). If you prefer, I can rename the latter into norm_l2sq() and norm_linf(). So far, these functions are only implemented for the scalar types (plus l2 and sq for complex).

@ukoethe
Contributor Author

ukoethe commented Sep 10, 2017

Although what can be constexpr should probably be constexpr (even if it does not fix the warning).

static const int and constexpr int are supposed to behave identically, so there is no need to prefer either.

@ukoethe
Contributor Author

ukoethe commented Sep 12, 2017

The following comment by @wolfv got lost:

Btw. norm is also implemented through BLAS (where available) and C++ in xtensor-blas. I've used an enum to disambiguate the different norms:

https://github.com/QuantStack/xtensor-blas/blob/cf0993a7cc2ad6fdb19d3f14e0304a1d83a8e94e/include/xtensor-blas/xlinalg.hpp#L35-L40

I wonder if we should try to be consistent here? I am open to changing the xtensor-blas interface, too. I don't think there is a big overhead in setting the norm via an enum. If it's done at compile time, I'd hope that the compiler already selects the correct branch.

@ukoethe
Contributor Author

ukoethe commented Sep 12, 2017

I vote for not using the name norm() in xtensor, because it will be confusing however you define it (it will contradict std::norm, numpy.linalg.norm or human intuition or all of them).

Regarding the current definition of norm() in https://github.com/QuantStack/xtensor-blas: the meaning of norm(a, 1) even depends on the dimension of array a: it resolves to the L1 norm if D=1, the induced 1-norm if D=2, and an error if D>2. Similarly, norm(a, 2) is the L2 norm if D=1, the spectral norm if D=2, and an error otherwise. The Frobenius, nuclear, and induced infinity norms of a matrix are selected via an enum, and the max norm of a matrix requires flattening. This is also pretty confusing (and poorly documented).

So, making the type of norm explicit in the function name seems to be the cleanest solution.

Member

@SylvainCorlay SylvainCorlay left a comment


A few inline comments and the following:

  • the new xt::isclose added in this PR conflicts with the one in xmath.hpp, so this is a bug. Note that the xtensor universal functions accept both expressions and scalars, by wrapping scalars into a cheap xscalar expression. Maybe a better approach would be to modify the existing implementation to handle infinities like the one you are proposing.

  • This comment also applies to a lot of the new scalar operators, such as dot, which is coming up when we add the tensor dot. In general, I think that we should add new universal functions rather than scalar-specific things.

  • The promote_t seems to be justified by the difference in behavior with respect to std::common_type for integer types and the return type of arithmetic operations. In terms of coding style, we should probably follow STL style, i.e. promote_type/promote_type_t, move it to xutils, and make it variadic like std::common_type.

  • A lot of what is in xconcepts does not really concern concepts. The new type traits should probably be in xutils.

  • There are some coding style points: macros should always be capitalized, some namespaces are not indented, and we have a format for section comments:

********************************************
* one space on each side and no empty line *
********************************************
  • We were looking at this together with Johan in the early afternoon and are a bit confused about the raison d'être of the norm traits; we guess you plan on using them for future development.

template <bool CONCEPTS>
using concept_check = typename std::enable_if<CONCEPTS, require_ok>::type;

/** @brief Concept checking macro (more redable than sfinae).
Member


redable -> readable

For example, it tells the user that <tt>unsigned char + unsigned char => int</tt>.
*/
template <class T1, class T2 = T1>
using promote_t = decltype(*(std::decay_t<T1>*)0 + *(std::decay_t<T2>*)0);
Member


Difference with std::common_type seems to justify separate type

  • move to xutils / xtl?
  • use naming convention promote_type / promote_type_t?
  • variadic version?

template<class T>
struct squared_norm_traits;

namespace concepts_detail {
Member


formatting

#define XTENSOR_ASSERT_MSG(PREDICATE, MESSAGE)
#endif

#define xtensor_precondition(PREDICATE, MESSAGE) \
Member


Capitalize macro

EXPECT_TRUE(0 == expected.compare(message.substr(0,expected.size())));
}
}
} // namespace xt  (no newline at end of file)
Member


newline


// scalar dot is needed for generic functions that should work with
// scalars and vectors alike
#define XTENSOR_DEFINE_SCALAR_DOT(T) \
Member


This would conflict with a generic xt::dot that takes expressions. Indeed, we wrap scalars into cheap xexpressions with xscalar. I don't think we need to add operators on scalars.


/**********************************************************/
/* */
/* sq() */
Member


sqr?

Contributor Author


I'd like to keep an edit distance of 2 to sqrt(). sqr() looks too much like 'did you mean sqrt()?'

EXPECT_EQ(norm_sq(-2.5), 6.25);

std::complex<double> c{ 2.0, 3.0 };
// EXPECT_EQ(norm_sq(c), 13.0);
Member


outstanding debug code?

Contributor Author


This is a strange beast I have to dig into: the test failed on some versions of gcc with the error message 'expected 13, got 13', i.e. the typical error when two reals are off in the last bit. However, this should never happen in an expression like 2.0*2.0 + 3.0*3.0, where every value is exactly representable. I'm wondering if there is a bug in std::norm()?


namespace concepts_detail {

template <class T, bool scalar = std::is_arithmetic<T>::value>
Member


indenting in namespaces.

namespace cmath
{

using std::abs;
Member


Note, these are already available in the xmath namespace in the form of universal functions.

Contributor Author


The intention of this namespace is different: I wanted to put the standard algebraic functions into a namespace of their own, so that one can use the idiom

    using namespace cmath;
    auto x = sqrt(y);

to have the compiler perform both argument-dependent lookup and lookup in namespace std (if y is a scalar). Without namespace cmath, one would have to write using std::sqrt; (for every function one wants to use) or using namespace std; (possibly importing functions one didn't want to import).

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

The promote_t seems to be justified by the difference of behavior wrt std::common_type for integer types and the return type of arithmetic operations.

The main purpose of promote and norm traits is to prevent overflow in array expressions, and to make this intention explicit. The following scenarios are common examples in image processing:

xarray<uint8_t> a = ..., b = ...;  // uint8_t is a popular value_type for images
xarray<promote_t<uint8_t>> c = a + b;

If c were defined as xarray<uint8_t> as well, addition would almost certainly overflow, leading to incorrect results. Similarly,

xarray<uint8_t>::shape_type shape { 1000, 1000, 1000}; // not an uncommon size nowadays
xarray<uint8_t> a(shape);
... // fill the array
norm_sq_t<uint8_t> squared_norm = norm_sq(a); 

Since the square of an 8-bit number needs 16 bits, and there are 2^30 pixels, the squared norm type must have at least 46 bits, i.e. it must be uint64_t.

I'm aware that these problems can be solved in several ways, but I like the possibility to make the desired promotions explicit (remember the Zen of Python :-).

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

A lot of what is in xconcepts does not really concern concepts. The new type traits should probably be in xutils.

I agree that xconcepts may not be the best place for traits classes, but xutils doesn't feel right either. How about a new header xtraits?

@SylvainCorlay
Member

We understand the need for promote_type and promote_type_t. My comment on that is that it should probably be STL style, variadic, and replace common_type_t in most places where we use it.

Just that part can deserve a PR :)

@SylvainCorlay
Member

SylvainCorlay commented Sep 15, 2017

Regarding xutils vs xtraits, I would put everything in xutils for now. The reason is that we will be stripping a lot of xutils out into a separate xtl package, which will be a common dependency of xeus, xtensor, xwidgets, etc.

So xutils is a good place for these to land.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

Note the xtensor universal functions accept both expressions and scalars, by wrapping scalars into a cheap xscalar expression.

I didn't realize that scalars are valid arguments to your implementation, because the documentation of isclose says: 'param e1: input array to compare'.

the new xt::isclose added in this PR conflicts with the one in xmath.hpp. So this is a bug.

I had indeed addressed this bug in my original PR #374, but forgot about it when splitting up that PR.

In fact, I'm concerned about xtensor defining xexpression templates with universal template arguments like this:

template <class E1, class E2>
inline auto isclose(E1&& e1, E2&& e2, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) noexcept

In my experience, declarations of the form template <class T> foo(T) are too greedy: they are often selected when the user actually intends to call a more specialized function variant, which is, however, considered an inferior match by the compiler (e.g. because it involves a user-defined conversion). Since C++'s rules for overload resolution and template argument deduction are extremely complicated, such errors are hard to avoid and correct. Therefore I adopted a strict rule in my code:

Universal templates must be constrained by a concept declaration.

In PR #374, I had changed the signature of isclose to

template <class E1, class E2,
          XTENSOR_REQUIRE<xexpression_concept<E1>::value || xexpression_concept<E2>::value> >
inline auto isclose(E1&& e1, E2&& e2, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) noexcept

and I strongly recommend adopting a convention like this for all xexpression functions. (I'm willing to implement it if you agree to the change.)

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

Regarding the existing isclose implementation:

  • Why is it defined such that isclose(a, b) != isclose(b, a)? People won't expect that.
  • Why is the check performed against the sum of absolute and relative error bounds? That's also surprising.
  • The use of std::abs in an xexpression prevents argument dependent lookup of the abs()-function. The recommended idiom is
    using std::abs;  // or:  using namespace xt::cmath;
    return abs(a - b) ...;

Maybe a better approach would be to modify the existing implementation

I'm still not convinced that functions should only be implemented as xexpressions. A bottom-up design (implement the function for scalars first, then call the scalar version in the array expressions, just as you do for sqrt() etc.) would probably be more familiar to most users and thus more readable and easier to debug. I suggest keeping the scalar isclose() and modifying struct isclose to just call it.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

When an xexpression function should also match scalar arguments, I suggest to add the concept check:

template <class E,
          XTENSOR_REQUIRE<xexpression_concept<E>::value || std::is_arithmetic<E>::value>>

@SylvainCorlay
Member

Regarding isclose(a, b) != isclose(b, a), the scalar functor is probably not the best, and its implementation should be changed.

All ufuncs accept scalars, so changing this would be a very major change to make for all ufuncs. Maybe an array-like, numpy-style concept would be a good idea, but we should not make this in scope for this PR, IMO.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

All ufuncs accept scalars, so changing this would be a very major change to do for all ufuncs.

Here is an argument for changing the behavior nonetheless: I disabled my isclose() function and implemented the corresponding test in terms of the ufunc, e.g.

EXPECT_FALSE(isclose(numeric_constants<>::PI, 3.141));

However, the test didn't compile because the ufunc's type is xfunction instead of bool. Replacing this with

EXPECT_FALSE(eval(isclose(numeric_constants<>::PI, 3.141)));

didn't work either because the result type is now xtensor_container. Writing

EXPECT_FALSE(eval(isclose(numeric_constants<>::PI, 3.141))[0]);

worked, but is really ugly and probably very slow due to array creation.

I really don't see why ufuncs and lazy evaluation should be used for function calls that involve only scalar arguments.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

... just realized that the correct call is

EXPECT_FALSE(isclose(numeric_constants<>::PI, 3.141)[0]);

but this is still ugly.

@wolfv
Member

wolfv commented Sep 15, 2017

You should also be able to do

EXPECT_FALSE(isclose(numeric_constants<>::PI, 3.141)());

which is slightly less ugly maybe.

@SylvainCorlay
Member

SylvainCorlay commented Sep 15, 2017

... just realized that the correct call is

EXPECT_FALSE(isclose(numeric_constants<>::PI, 3.141)[0]);
but this is still ugly.

ufuncs return xexpressions, even in the 0-D case.

operator() has a uniform behavior. When passing too many arguments, only the last D are used where D is the dimension. For a 0-D array, you can then just call operator() with zero argument as explained by @wolfv. See http://xtensor.readthedocs.io/en/latest/numpy-differences.html for example.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

Ok, isclose(numeric_constants<>::PI, 3.141)() is slightly better, but still unacceptable as a general idiom for additional algebraic functions on scalars. Suppose a project combines xtensor with boost.math, both implementing additional scalar functions, but with different idioms. It's a mess!

@SylvainCorlay
Member

SylvainCorlay commented Sep 15, 2017

Ok, isclose(numeric_constants<>::PI, 3.141)() is slightly better, but still unacceptable as a general idiom for additional algebraic functions on scalars. Suppose a project combines xtensor with boost.math, both implementing additional scalar functions, but with different idioms. It's a mess!

Unacceptable is a strong statement!

Universal functions can only return xexpressions, even in the 0-D case, since the dimension is only known upon evaluation. It is a runtime attribute. So all ufuncs return (unevaluated) expressions.

So as you said, ufuncs cannot be seen as overloads of scalar functions.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

Unacceptable is a strong statement!

Sorry for the wording.

Universal functions can only return xexpressions, even in the 0-D case, since the dimension is only known upon evaluation. It is a runtime attribute. So all ufuncs return (unevaluated) expressions.

I fully accept this, but it misses the point. I just suggested to implement scalar functions as normal scalar functions and leave xexpressions to array arithmetic. I really didn't expect that this statement would be controversial.

@SylvainCorlay
Member

SylvainCorlay commented Sep 15, 2017

Unacceptable is a strong statement!

Sorry for the wording.

no problem

Universal functions can only return xexpressions, even in the 0-D case, since the dimension is only known upon evaluation. It is a runtime attribute. So all ufuncs return (unevaluated) expressions.

I fully accept this, but it misses the point. I just suggested to implement scalar functions as normal scalar functions and leave xexpressions to array arithmetic. I really didn't expect that this statement would be controversial.

I think we should open a discussion issue on the behavior of ufuncs.

Maybe a solution would be

  • When any of the arguments is an expression, return an expression (even 0-D).
  • When all arguments are scalars, return a scalar, or even better, something like an owning xscalar<T>, except that it would be implicitly convertible to the underlying scalar.

In any case, I think this deserves its own issue. (i.e. behavior of ufuncs with scalar inputs)

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

When any of the arguments is an expression, return an expression (even 0-D).

Yes.

When all arguments are scalars, return a scalar, or even better, something like an owning xscalar except that it would be implicitly convertible to the underlying scalar.

This could be done, but it requires significant metaprogramming on the xexpression's result type, where a straightforward function implementation would be sufficient. In my VIGRA experience, that's the kind of cleverness my users don't like at all (and I fell into this trap several times). The Zen of Python again:

Explicit is better than implicit.
Simple is better than complex.

Note that there will be no code duplication, because the xexpressions just call the corresponding scalar functions.

@SylvainCorlay
Member

SylvainCorlay commented Sep 15, 2017

I think this can be done with a reasonable amount of code in xscalar, but it is also likely to be very subtle. xscalar and xfunction have proven to be difficult beasts, so I think I would rather have a well-scoped discussion for this.

@ukoethe
Contributor Author

ukoethe commented Sep 15, 2017

In any case, I think this deserves its own issue. (i.e. behavior of ufuncs with scalar inputs)

See #412

@SylvainCorlay
Member

Closing the megathread as we are handling this in multiple PRs!
