Discussion for 'Transforming `std::find` into `std::ranges::find`' #5

cjdb opened this issue May 29, 2018 · 1 comment

@cjdb cjdb commented May 29, 2018

Please be aware of the Code of Conduct when discussing.

@cjdb cjdb added the discussion label May 29, 2018

@MathiasMagnus MathiasMagnus commented Jul 24, 2018

Hi Christopher!

The two blog posts available give a gentle introduction to the topic, yet provide ample food for thought. Can't wait till you reach the numeric stuff. Just be careful not to promise too many blog posts on the side that you know you could write up, but may never actually get around to. :) Unimportant note: there is a typo in the code snippet `template <InputIterator I, Sentinel<I> S, EqualityComparableWith<reference_t<I>> T>`, where I believe a comma is missing.

One thing I have been thinking about for quite some time, though never pursued far enough to even write a blog post about, is conceptifying calculus, algebra, call it what you may. I know, it was done a long time ago and it's called mathematics, but I mean actually implementing the part that a compiler can check. Things akin to those you mentioned when we met, starting from algebraic sets, followed by number types, up to a point where one could say `using Nat = std::ℕ<std::int32_t>;`. This is the point where I declare to henceforth say `Nat` whenever I think of a natural number in the mathematical sense, and I choose `std::int32_t` as a type that models it to a reasonable extent.

When I say model, I naturally (ha!) mean it satisfies a concept, and when I say "to a reasonable extent", you will rightfully add: that is not a well-defined concept. This part should be subject to discussion: whether I want errors/exceptions to arise when my choice of modelling type cannot get its job done (overflow, underflow, etc.), or whether choosing such a type is equivalent to an explicit promise (an axiom, per se) that the following program shall (and will) have a mathematically sound and meaningful result, if my choice of `std::int32_t` always behaved as the mathematical construct it is meant to model. Using `Nat` over built-in types could save one from peculiarities such as this. Of course, this makes a lot of sense when dealing with `std::size_t` and `std::ptrdiff_t`, but not so much when I would like to be reminded that the result of some operation, under the sole assumption of input types (not involving interval arithmetic), may not be representable in the given set, much like when mixing fixed- and floating-point calculations.

Having well-behaved numbers, one can venture into N-dimensional vector spaces, where the vector space is still free of the choice of its basis. Operating on vectors from the same vector space but with different basis vectors is possible but ambiguous (which basis should the result use?), so defining that as an expression template to be consumed by an assignment seems reasonable. It's not alien to the STL (think `std::valarray`, which may legally be, and is indeed encouraged to be, implemented via expression templates).
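A toy expression-template sketch of the idea: the sum below is not computed where it is written, only when it is indexed (or consumed by an assignment, which is where a basis-resolution policy could live). All names are illustrative:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

template <std::size_t N>
struct vec {
    std::array<double, N> data{};
    double operator[](std::size_t i) const { return data[i]; }
};

// An unevaluated sum: element values are produced lazily on demand.
template <typename L, typename R>
struct vec_sum {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
};

template <std::size_t N, typename R>
vec_sum<vec<N>, R> operator+(const vec<N>& l, const R& r) { return {l, r}; }
```

An assignment operator on `vec` taking a `vec_sum` would be the point where the result's basis is chosen and the expression is actually evaluated.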

This way one might actually do graphics (and physics) in a cleaner way, where coordinate transformations happen on just about every corner. I'd much rather define my coordinate systems and their relations once and deal with the input and output only, not the actual series of operations that implement my calculations. Instead of having to deal with the canonical MVP matrices, I only define the series of transformations. (More on this later.)

Because I only have real-world code from physics, consider something like (explanation after code snippet):

```cpp
// Expand an analytic function 'f' defined in spherical coordinates
// over the spin-weighted spherical harmonics.
// Effectively changes the signature from {rho,theta,phi} to {rho,{l,m,s}}.
// Uses a fixed count of function evaluations (assumes the worst case).
auto series_expand = [=,
                      m = max_theta_evals,
                      n = max_phi_evals](const auto f)
{
    return [=](const real rho, const spherical_index idx)
    {
        return math::chebysev<real>(0, pi, m, [=](const real& theta)
        {
            return math::periodic<real>(0, 2 * pi, n, [=](const real& phi)
            {
#ifdef _DEBUG
                floating::scoped_exception_enabler fpe;
#endif
                return std::conj(math::Y_lms(theta, phi, idx.l, idx.m, idx.s)) *
                       f(rho, theta, phi) * std::pow(rho, 2) * std::sin(theta);
            });
        });
    };
};
```

```cpp
// Reduce a spin-weighted spherical expansion to the original analytic
// spherical function.
// Effectively changes the signature from {rho,{l,m,s}} to {rho,theta,phi}.
auto reduce_series = [=](const auto f)
{
    return [=,
            ext = spherical_extent({ 0, 0, 0 },
                                   { L_max, L_max, 0 })](const real rho,
                                                         const real, // theta
                                                         const real) // phi
    {
        using F = decltype(f);
        using T = std::result_of_t<F(const real, const spherical_index)>;

        return std::real(std::accumulate(ext.cbegin(),
                                         ext.cend(),
                                         T{},
                                         [=](const T& cum_sum,
                                             const spherical_index& idx)
        {
            return cum_sum + f(rho, idx);
        }));
    };
};
```

```cpp
auto descartes_to_spherical = [](const auto f)
{
    return [=](const real rho,
               const real theta,
               const real phi)
    {
        // Standard spherical-to-Cartesian map for the arguments of f.
        return f(rho * std::sin(theta) * std::cos(phi),
                 rho * std::sin(theta) * std::sin(phi),
                 rho * std::cos(theta));
    };
};
```

```cpp
auto spherical_to_descartes = [](const auto f)
{
    return [=](const real x, const real y, const real z)
    {
        // theta = atan(sqrt(x^2 + y^2) / z), phi = atan(y / x);
        // std::atan2 handles the quadrants and the x == 0 / z == 0 cases.
        return f(std::sqrt(std::pow(x, 2) + std::pow(y, 2) + std::pow(z, 2)),
                 std::atan2(std::sqrt(std::pow(x, 2) + std::pow(y, 2)), z),
                 std::atan2(y, x));
    };
};
```

I'm using `real = double`, not `real = std::ℝ<double>`, unfortunately. :(

Let's say I'm working with functions expanded over the basis of spin-weighted spherical harmonics, a generalization of the slightly more common spherical harmonics. TL;DR: it's a set of functions that behave well when describing systems with near-spherical symmetry; think of the geoid shape of the Earth, for instance. The totally symmetric part is the first basis function (l=0, m=0, and s=0 for the spin case). The first two functions, `series_expand` and `reduce_series`, do just what the comments say: they change coordinate systems in a slightly more complicated (less familiar) way than an affine transformation in graphics does. (I omitted parts of the production code, such as `adaptive_series_expand` and the actual implementation of `math::Y_lms`, as they don't add value relevant to the point being made.)

`series_expand` takes a function and multiplies it by the complex conjugate of the basis function and the differential volume element. Both are inherent properties of the coordinate system and the basis function set, and eventually I would like to write a generic version of this function that can expand over any series of basis functions. For end users of such a template function, the arguments need to be constrained via mathematical concepts, presumably much of which is on the way to standardization. `descartes_to_spherical` and `spherical_to_descartes` are nomen est omen: the names say it all. (In fact, the first two should be named `spherical_to_harmonics` and vice versa.)

Finally, a Monte-Carlo unit test of the above transformations may define the following two functions:

```cpp
auto ellipsoid = [](real x,
                    real y,
                    real z)
{
    constexpr real a = (real)2.0;
    constexpr real b = (real)2.0;
    constexpr real c = (real)2.0;

    return std::sqrt(std::pow(a * x, 2) + std::pow(b * y, 2) + std::pow(c * z, 2));
};
```

```cpp
auto reconstructed_ellipsoid =
    spherical_to_descartes(reduce_series(adaptive_series_expand(descartes_to_spherical(ellipsoid))));
```
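Such a Monte-Carlo comparison could be sketched as follows; the `max_abs_error` helper, the RNG seed, and the sampling domain are illustrative choices of mine, not from the production code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

// Sample random {x,y,z} points and record the worst absolute deviation
// between two functions of three variables.
template <typename F, typename G>
double max_abs_error(F f, G g, int samples)
{
    std::mt19937 gen{42};
    std::uniform_real_distribution<double> dist{-1.0, 1.0};
    double worst = 0.0;
    for (int i = 0; i < samples; ++i)
    {
        const double x = dist(gen), y = dist(gen), z = dist(gen);
        worst = std::max(worst, std::abs(f(x, y, z) - g(x, y, z)));
    }
    return worst;
}
```

The actual test would then compare `ellipsoid` against `reconstructed_ellipsoid` and assert that the error stays below some tolerance determined by the series cut-off.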

With a random input of {x,y,z} values, one can test how well the original ellipsoid and its summed-up series expansion agree. Note that a slightly more natural syntax may be given, free of parenthesis insanity and readable in logical order (use `|` if you have a sysadmin/shell background and are more used to seeing data flow through the pipe character than through the arrow-ish bitwise/stream operator):

```cpp
auto reconstructed_ellipsoid = ellipsoid >> descartes_to_spherical >>
                               adaptive_series_expand >> reduce_series >>
                               spherical_to_descartes;
```
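One possible way to obtain this syntax is to overload `operator>>` so that `f >> adaptor` simply applies the higher-order adaptor to `f`. This is a sketch of my own; nothing like it exists in the standard library:

```cpp
#include <cassert>
#include <utility>

// `f >> adapt` reads left to right: feed the function f into the adaptor,
// which returns the next function in the chain.
template <typename F, typename Adaptor>
auto operator>>(F f, Adaptor adapt)
{
    return adapt(std::move(f));
}
```

With this in scope, `ellipsoid >> descartes_to_spherical` means `descartes_to_spherical(ellipsoid)`, and the whole chain unfolds into the nested call spelled out earlier.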

For the general case of coordinate transformations, though, it is very likely that, just like the Special Math TR made it into the ISO standard, there is/will be a need for a Special Math Concepts TR, with concepts such as HausdorffSpace (or CompactSpace, if we are against having to remember people's names) for "non-holed" manifolds, BanachSpace for vector spaces with well-defined norms (`std::distance`), and even HilbertSpace, which might be an infinite-dimensional space. (I'd simply love to see stuff like the infinite series of SWS harmonic functions be pre-defined as a `constexpr`, infinite range of RegularInvocables, only to have me constrain it with a cut-off number which will take the first N from the range. DFTs are no different: take the first few from the infinite basis of trigonometric polynomials (aka Fourier space) and series-expand over them.)

Concerning the much larger audience of those pursuing graphics: you (Christopher, not other readers of this brainstorm) have already heard how I'm interested in single-source graphics and compute. I'd imagine that assembling a graphics pipeline could be done in a very similar fashion.

```cpp
auto pipeline = ibo_to_primitives >> model_to_world >> world_to_camera >>
                frustum_culling >> camera_to_screen >> z_culling >> shade;
```

All of these pipeline stages are stateful, so they are best defined elsewhere. They all have accessors to buffers, and they might own buffers (such as the Z-buffer), textures, whatever.

P.S.: These pipelines are shader pipelines and have nothing to do with my other overdue pipeline write-up. All of this, when assembled, is an atomic shader invocation: there is one associated PSO and one draw call. My other problem concerns giving a similarly nice DSL to visually represent multiple draw/compute/IO, or generally any async tasks that may be executed concurrently, preferably even when multiple runtimes are involved (STL/CRT, MPI, Vulkan, OpenCL, OpenGL).
