Closed

Commits (27)
- e9cf857: added FIXME (ukoethe, Jul 28, 2017)
- 802c6ec: first running version of TinyArray and tests (needs clean-up) (ukoethe, Jul 28, 2017)
- c764648: fixed comment (ukoethe, Jul 28, 2017)
- 73ba2e3: cleaned up constructors (ukoethe, Jul 29, 2017)
- 1ec55ca: switched to lower-case naming convention; refactored type inference (ukoethe, Jul 29, 2017)
- 488385e: finished first refactoring stage (ukoethe, Jul 29, 2017)
- ac2aa87: replace decay => decay_t (ukoethe, Jul 29, 2017)
- e4a7bb9: improved isclose() (ukoethe, Jul 30, 2017)
- 77f6621: refactored algebraic operations (ukoethe, Jul 30, 2017)
- c50d1bc: enabled concept check in xexpression functions (ukoethe, Jul 30, 2017)
- 13969b0: added another concept check (ukoethe, Jul 30, 2017)
- 098908c: improved pow(tiny_array) (ukoethe, Jul 30, 2017)
- c27819c: changed const -> constexpr (ukoethe, Jul 30, 2017)
- 298bd23: improved type inference, finished transition to lowercase (ukoethe, Jul 30, 2017)
- 10e7bb9: minor fixes (ukoethe, Jul 30, 2017)
- 85f9f60: fixed warnings (ukoethe, Jul 30, 2017)
- 1b7a2c7: fixed warnings (ukoethe, Jul 30, 2017)
- bf747c4: fixed warnings (ukoethe, Jul 30, 2017)
- bc10395: fixed warnings (ukoethe, Jul 30, 2017)
- 07f01e6: created xmathutil.hpp (ukoethe, Jul 30, 2017)
- 7dd9b64: replaced std::array => tiny_array (ukoethe, Jul 30, 2017)
- 55b9d22: added xreducer::operator() with zero arguments (ukoethe, Jul 30, 2017)
- 6d43ea4: fixed ambiguities (ukoethe, Jul 30, 2017)
- 2f1d7ea: replaced std::vector => tiny_array<..., runtime_size> (ukoethe, Jul 31, 2017)
- 1fab593: resolved FIXMEs (ukoethe, Jul 31, 2017)
- 9064941: removed incorrect 'typename' (ukoethe, Jul 31, 2017)
- eed85b0: more dyn_shape replacements (ukoethe, Jul 31, 2017)
16 changes: 8 additions & 8 deletions include/xtensor/xadapt.hpp
@@ -89,7 +89,7 @@ namespace xt
*/
template <class C, std::size_t N, layout_type L = DEFAULT_LAYOUT>
xtensor_adaptor<C, N, L>
-xadapt(C& container, const std::array<typename C::size_type, N>& shape, layout_type l = L);
+xadapt(C& container, const stat_shape<typename C::size_type, N>& shape, layout_type l = L);
Member:
Argh, I can't comment on the lines above, but there is still a std::enable_if_t<!detail::is_array<SC>::value... which obviously should be replaced. Maybe we can have something like detail::is_static_container or detail::is_static_size?
This currently prevents building of the xadapt tests with GCC.

Member:
Sorry, I just discovered your new overload further down. Unfortunately it doesn't seem to be picked up. I'd love to investigate more, but lack the time (it's on Ubuntu 16.04 / GCC 5.4).


/**
* Constructs an xtensor_adaptor of the given stl-like container,
@@ -100,7 +100,7 @@ namespace xt
*/
template <class C, std::size_t N>
xtensor_adaptor<C, N, layout_type::dynamic>
-xadapt(C& container, const std::array<typename C::size_type, N>& shape, const std::array<typename C::size_type, N>& strides);
+xadapt(C& container, const stat_shape<typename C::size_type, N>& shape, const stat_shape<typename C::size_type, N>& strides);
Member:
In terms of naming, I would prefer xshape, and then decide whether it is static or dynamic based on the template arguments. Is it necessary to have dyn_shape vs. stat_shape?

Member:
I.e., xshape<3> == stat_shape, and xshape() == dyn_shape...

Member:
On that note, I also think we shouldn't use the size_t template parameter everywhere. It would be much better to define xshape<xt::index_t> in one place and use it consistently across the library, so there is only one place to change it.

Contributor (author):
I agree that xshape<3> and xshape<> are better alias types in the long run. During the experimentation phase, the present definition has the advantage of making the old and new shape types exchangeable.


/**
* Constructs an xtensor_adaptor of the given dynamically allocated C array,
@@ -116,7 +116,7 @@
template <class P, std::size_t N, class O, layout_type L = DEFAULT_LAYOUT, class A = std::allocator<std::remove_pointer_t<P>>>
xtensor_adaptor<xbuffer_adaptor<std::remove_pointer_t<P>, O, A>, N, L>
xadapt(P& pointer, typename A::size_type size, O ownership,
-const std::array<typename A::size_type, N>& shape, layout_type l = L, const A& alloc = A());
+const stat_shape<typename A::size_type, N>& shape, layout_type l = L, const A& alloc = A());

/**
* Constructs an xtensor_adaptor of the given dynamically allocated C array,
@@ -132,7 +132,7 @@
template <class P, std::size_t N, class O, class A = std::allocator<std::remove_pointer_t<P>>>
xtensor_adaptor<xbuffer_adaptor<std::remove_pointer_t<P>, O, A>, N, layout_type::dynamic>
xadapt(P& pointer, typename A::size_type size, O ownership,
-const std::array<typename A::size_type, N>& shape, const std::array<typename A::size_type, N>& strides, const A& alloc = A());
+const stat_shape<typename A::size_type, N>& shape, const stat_shape<typename A::size_type, N>& strides, const A& alloc = A());

/*****************************************
* xarray_adaptor builder implementation *
@@ -176,22 +176,22 @@ namespace xt

template <class C, std::size_t N, layout_type L>
inline xtensor_adaptor<C, N, L>
-xadapt(C& container, const std::array<typename C::size_type, N>& shape, layout_type l)
+xadapt(C& container, const stat_shape<typename C::size_type, N>& shape, layout_type l)
{
return xtensor_adaptor<C, N, L>(container, shape, l);
}

template <class C, std::size_t N>
inline xtensor_adaptor<C, N, layout_type::dynamic>
-xadapt(C& container, const std::array<typename C::size_type, N>& shape, const std::array<typename C::size_type, N>& strides)
+xadapt(C& container, const stat_shape<typename C::size_type, N>& shape, const stat_shape<typename C::size_type, N>& strides)
{
return xtensor_adaptor<C, N, layout_type::dynamic>(container, shape, strides);
}

template <class P, std::size_t N, class O, layout_type L, class A>
inline xtensor_adaptor<xbuffer_adaptor<std::remove_pointer_t<P>, O, A>, N, L>
xadapt(P& pointer, typename A::size_type size, O,
-const std::array<typename A::size_type, N>& shape, layout_type l, const A& alloc)
+const stat_shape<typename A::size_type, N>& shape, layout_type l, const A& alloc)
{
using buffer_type = xbuffer_adaptor<std::remove_pointer_t<P>, O, A>;
buffer_type buf(pointer, size, alloc);
@@ -201,7 +201,7 @@
template <class P, std::size_t N, class O, class A>
inline xtensor_adaptor<xbuffer_adaptor<std::remove_pointer_t<P>, O, A>, N, layout_type::dynamic>
xadapt(P& pointer, typename A::size_type size, O,
-const std::array<typename A::size_type, N>& shape, const std::array<typename A::size_type, N>& strides, const A& alloc)
+const stat_shape<typename A::size_type, N>& shape, const stat_shape<typename A::size_type, N>& strides, const A& alloc)
{
using buffer_type = xbuffer_adaptor<std::remove_pointer_t<P>, O, A>;
buffer_type buf(pointer, size, alloc);
2 changes: 1 addition & 1 deletion include/xtensor/xarray.hpp
@@ -121,7 +121,7 @@ namespace xt
* xarray_adaptor declaration *
******************************/

-template <class EC, layout_type L = DEFAULT_LAYOUT, class SC = std::vector<typename EC::size_type>>
+template <class EC, layout_type L = DEFAULT_LAYOUT, class SC = dyn_shape<typename EC::size_type>>
class xarray_adaptor;

template <class EC, layout_type L, class SC>
1 change: 1 addition & 0 deletions include/xtensor/xassign.hpp
@@ -157,6 +157,7 @@ namespace xt
shape_type shape = make_sequence<shape_type>(dim, size_type(1));
bool trivial_broadcast = de2.broadcast_shape(shape);

+// FIXME: The second comparison is lexicographic. Comment why this is intended.
if (dim > de1.dimension() || shape > de1.shape())
{
typename E1::temporary_type tmp(shape);
4 changes: 2 additions & 2 deletions include/xtensor/xbroadcast.hpp
@@ -159,15 +159,15 @@ namespace xt
template <class E, class I>
inline auto broadcast(E&& e, std::initializer_list<I> s) noexcept
{
-using broadcast_type = xbroadcast<const_xclosure_t<E>, std::vector<std::size_t>>;
+using broadcast_type = xbroadcast<const_xclosure_t<E>, dyn_shape<std::size_t>>;
using shape_type = typename broadcast_type::shape_type;
return broadcast_type(std::forward<E>(e), forward_sequence<shape_type>(s));
}
#else
template <class E, class I, std::size_t L>
inline auto broadcast(E&& e, const I (&s)[L]) noexcept
{
-using broadcast_type = xbroadcast<const_xclosure_t<E>, std::array<std::size_t, L>>;
+using broadcast_type = xbroadcast<const_xclosure_t<E>, stat_shape<std::size_t, L>>;
using shape_type = typename broadcast_type::shape_type;
return broadcast_type(std::forward<E>(e), forward_sequence<shape_type>(s));
}
55 changes: 41 additions & 14 deletions include/xtensor/xbuilder.hpp
@@ -207,7 +207,7 @@ namespace xt
* @return xgenerator that generates the values on access
*/
template <class T = bool>
-inline auto eye(const std::vector<std::size_t>& shape, int k = 0)
+inline auto eye(const dyn_shape<std::size_t>& shape, int k = 0)
{
return detail::make_xgenerator(detail::fn_impl<detail::eye_fn<T>>(detail::eye_fn<T>(k)), shape);
}
@@ -379,7 +379,8 @@ namespace xt

private:

-inline value_type access_impl(xindex idx) const
+template <class T, class A>
+inline value_type access_impl(std::vector<T, A> idx) const
{
auto get_item = [&idx](auto& arr)
{
@@ -390,6 +391,18 @@
return apply<value_type>(i, get_item, m_t);
}

template <class T>
inline value_type access_impl(tiny_array<T, runtime_size> const & old_idx) const
{
size_type i = old_idx[m_axis];
auto idx = old_idx.erase(m_axis);
auto get_item = [&idx](auto& arr)
{
return arr[idx];
};
return apply<value_type>(i, get_item, m_t);
}

const std::tuple<CT...> m_t;
const size_type m_axis;
};
@@ -412,7 +425,7 @@
template <class... Args>
value_type operator()(Args... args) const
{
-std::array<size_type, sizeof...(Args)> args_arr = {static_cast<size_type>(args)...};
+stat_shape<size_type, sizeof...(Args)> args_arr = {static_cast<size_type>(args)...};
return m_source(args_arr[m_axis]);
}

@@ -478,6 +491,12 @@ namespace xt
return temp;
}

template <class T, int N>
inline auto add_axis(tiny_array<T, N> arr, std::size_t axis, std::size_t value)
{
return arr.insert(axis, value);
}

template <class T>
inline T add_axis(T arr, std::size_t axis, std::size_t value)
{
@@ -520,7 +539,7 @@
inline auto meshgrid_impl(std::index_sequence<I...>, E&&... e) noexcept
{
#if defined X_OLD_CLANG || defined _MSC_VER
-const std::array<std::size_t, sizeof...(E)> shape { e.shape()[0]... };
+const stat_shape<std::size_t, sizeof...(E)> shape { e.shape()[0]... };
return std::make_tuple(
detail::make_xgenerator(
detail::repeat_impl<xclosure_t<E>>(std::forward<E>(e), I),
@@ -652,7 +671,7 @@
template <class... Args>
inline value_type operator()(Args... args) const
{
-std::array<size_type, sizeof...(Args)> idx({static_cast<size_type>(args)...});
+stat_shape<size_type, sizeof...(Args)> idx({static_cast<size_type>(args)...});
return access_impl(idx.begin(), idx.end());
}

@@ -723,23 +742,31 @@
{
using type = std::array<I, L - 1>;
};

template <class I, int L>
struct diagonal_shape_type<tiny_array<I, L>>
{
using type = std::conditional_t<(L > 0),
tiny_array<I, L - 1>,
tiny_array<I, runtime_size>>;
};
}

/**
* @brief Returns the elements on the diagonal of arr
* If arr has more than two dimensions, then the axes specified by
* axis_1 and axis_2 are used to determine the 2-D sub-array whose
* diagonal is returned. The shape of the resulting array can be
* determined by removing axis1 and axis2 and appending an index
* to the right equal to the size of the resulting diagonals.
*
* @param arr the input array
* @param offset offset of the diagonal from the main diagonal. Can
* be positive or negative.
* @param axis_1 Axis to be used as the first axis of the 2-D sub-arrays
* from which the diagonals should be taken.
* @param axis_2 Axis to be used as the second axis of the 2-D sub-arrays
* from which the diagonals should be taken.
* @returns xexpression with values of the diagonal
*
* \code{.cpp}
@@ -810,7 +837,7 @@
* @brief Reverse the order of elements in an xexpression along the given axis.
* Note: A NumPy/Matlab style `flipud(arr)` is equivalent to `xt::flip(arr, 0)`,
* `fliplr(arr)` to `xt::flip(arr, 1)`.
*
* @param arr the input xexpression
* @param axis the axis along which elements should be reversed
*