Improve glam source readability #177

Closed
bitshifter opened this issue May 9, 2021 · 7 comments
bitshifter (Owner) commented May 9, 2021

Internally, glam got a lot more complex when support for other primitive types was added, e.g. f64, i32 and u32.

There are a couple of sources of complication:

  • The use of macros to generate common code and comments for most types
  • The core module, which defines traits that are implemented for different storage types and which handles the actual implementation of vectors, quaternions and matrices for the different primitive types, as well as SIMD architectures (currently just SSE2)

The purpose of both of these is to avoid a lot of code duplication between different types. Avoiding duplicating comments is almost as significant a motivator as avoiding code duplication. The old simple way that glam was written felt like it would not scale, at least not while the library is still evolving.
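
To illustrate, the macro pattern looks roughly like the following simplified, hypothetical sketch (the macro name and hand-written structs are made up for illustration, not copied from glam): a single declarative macro stamps out the same method, doc comment included, for each concrete vector type.

// Hypothetical sketch of the macro-generation pattern, not glam's actual macros.
// One declarative macro generates the same method and doc comment for every
// concrete vector struct it is invoked with.
macro_rules! impl_vec3_common {
    ($vec3:ident) => {
        impl $vec3 {
            /// Returns a vector containing the absolute value of each element.
            #[inline]
            pub fn abs(self) -> Self {
                Self {
                    x: self.x.abs(),
                    y: self.y.abs(),
                    z: self.z.abs(),
                }
            }
        }
    };
}

pub struct Vec3 { pub x: f32, pub y: f32, pub z: f32 }
pub struct DVec3 { pub x: f64, pub y: f64, pub z: f64 }

impl_vec3_common!(Vec3);
impl_vec3_common!(DVec3);

Reading a method like abs then requires mentally expanding the macro, which is the readability cost described here.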

The macros are generally what users run into when they want to read the glam source for some reason, although it's likely the next hurdle will be the core crate traits.

One thought on how to make the source more readable is to use a templating language to generate the glam source instead of using Rust macros. The generated source would then be committed, and that is what users would compile and view. Contributors would ideally update the template files; however, PRs could be made to the generated source and then later ported back into the template generation.

The core crate still feels necessary; I don't see an easy way to avoid it, but at least without the macros it might be easier to follow.

A few people have suggested https://github.com/dtolnay/cargo-expand for code gen. It's meant as a debugging tool but maybe it will do the job, or could be modified to support this use case.

Another suggestion was to look at rust-analyzer and cargo xtask generate.
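
As a concrete sketch of the templating idea, a cargo xtask-style generator driven by a template engine such as Tera could look roughly like the following. The template glob, context keys and output paths here are made-up assumptions for illustration, not glam's actual layout.

// Hypothetical `cargo xtask generate` sketch using the `tera` crate.
// The template glob, context keys and output paths are assumptions for
// illustration only; they do not reflect glam's actual code generation.
use std::error::Error;
use tera::{Context, Tera};

fn main() -> Result<(), Box<dyn Error>> {
    // Load every template matching the glob, e.g. `templates/vec.rs.tera`.
    let tera = Tera::new("templates/**/*.rs.tera")?;

    // Render the same template once per scalar type instead of expanding a
    // Rust macro; the generated files are committed and are what users read.
    for (scalar, name) in [("f32", "Vec3"), ("f64", "DVec3")] {
        let mut context = Context::new();
        context.insert("scalar_t", scalar);
        context.insert("self_t", name);

        let source = tera.render("vec.rs.tera", &context)?;
        std::fs::write(format!("src/{}.rs", name.to_lowercase()), source)?;
    }
    Ok(())
}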

mickare (Contributor) commented Jun 6, 2021

Hey @bitshifter,

Those are some good thoughts. I think the core of the library is well designed for the current stable state of Rust. I like the direct access to the storage values.

As you said, the macros are one big part that could be improved. But for me, improving the macros is not only about code readability. I'm missing generic vectors that "magically" work. I would like a u8 vector, or vectors of custom types.

One way to achieve this would be to move each vector method (e.g. abs(), clamp(), is_nan(), etc.) into its own trait.
This could reduce the heavy macro usage by providing generic implementations for known scalar types (and SIMD as well).

I'm not sure about Rust best practices, but I would group the traits into definition and impl files depending on the functionality.


Here are some examples of how it could look:
Storage Traits
// Empty marker trait for a storage of type T; could be extended with an Item type...
trait Storage<T> {}

impl<T> Storage<T> for Vec3<T> {}

trait Map<T, R>: Storage<T> {
  type Output: Storage<R>;

  fn map<F: Fn(T) -> R>(self, func: F) -> Self::Output;
}

trait Join<T, U, R, Rhs> : Storage<T> {
  type Output: Storage<R>;

  fn join<F: Fn(T, U) -> R>(self, other: Rhs, func: F) -> Self::Output;
}

...

impl<T,R> Map<T,R> for Vec3<T> {
  type Output = Vec3<R>;

  #[inline]
  fn map<F: Fn(T) -> R>(self, func: F) -> Self::Output {
     Self::Output::new(func(self.x), func(self.y), func(self.z))
  }
}

impl<T,U,R> Join<T,U,R, Vec3<U>> for Vec3<T> {
  type Output = Vec3<R>;

  #[inline]
  fn join<F: Fn(T, U) -> R>(self, other: Vec3<U>, func: F) -> Self::Output {
    Self::Output::new(func(self.x, other.x), func(self.y, other.y), func(self.z, other.z))
  }
}

Math Traits
// Note: Product, Dot and the Sum used below are custom traits in this sketch, not the std traits.
impl<T: Add<Output = T>> Add for Vec3<T> {
  type Output = Self;

  #[inline]
  fn add(self, other: Self) -> Self::Output {
     self.join(other, <T as Add>::add)
  }
}

impl<T: Mul<Output = T>> Product for Vec3<T> {
  type Output = Self;

  #[inline]
  fn product(self, other: Self) -> Self::Output {
    self.join(other, <T as Mul>::mul)
  }
}

impl<T: Mul<Output = T> + Add<Output = T>> Dot for Vec3<T> {
  type Output = T;

  #[inline]
  fn dot(self, other: Self) -> Self::Output {
     self.product(other).sum()
  }
}

Scalar Traits
use num_traits::sign::Signed;

// ... definition of traits Abs, Clamp, etc.. with docstrings

impl<T: Signed> Abs for Vec3<T> {
  #[inline]
  fn abs(self) -> Self {
     self.map(|v| v.abs()) // Signed::abs takes &self, so call it via a closure
  }
}

impl<T: PartialOrd + Copy> Clamp for Vec3<T> {
  #[inline]
  fn clamp(self, min_value: T, max_value: T) -> Self {
    let func = |v: T| if v < min_value { min_value } else if v > max_value { max_value } else { v };
    self.map(func)
  }
}

bitshifter changed the title from "Improve glam soure readablility" to "Improve glam source readablility" on Jun 15, 2021
bitshifter (Owner, Author) commented:

glam has intentionally avoided being a generic library. Instead, it has focused on the most common use case (f32) and has expanded a bit from there, due to demand, to support some other primitive types such as f64, i32 and u32.

It's not clear from your example how this approach would work with a storage type of __m128 and a public scalar type of f32, as the traits only have one generic type T but surely two would be required?

In short, I think it is difficult to combine explicit SIMD usage with a generic interface, and I think it would greatly complicate the public interface, so it's not a direction I've ever pursued.

Internally I do use a lot of traits to implement functionality and get some code reuse; however, it is much more complicated and you tend to run into confusing compile errors. It has always been a goal of glam to provide a simple interface which doesn't impose this kind of complexity on users, so all of this complexity is internal only for now, except for the case where users want to view the source.

Another problem with implementing everything via traits is the documentation. You can no longer view the docs for Vec3 and see all the methods it implements. Documentation is on the trait, not the struct, so it's much harder for users to find all the methods a struct implements. You need to find the trait that provides the method you want and then check whether it's implemented for the type you are interested in. cgmath is a good example of this issue: try to find the documentation for dot on https://docs.rs/cgmath/0.18.0/cgmath/struct.Vector3.html. You need to work out which trait has dot (there are multiple) and then work out whether that trait is implemented for Vector3.
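
As a simplified illustration of that documentation trade-off (a hypothetical sketch, not actual glam or cgmath source):

// Inherent method: `dot` and its docs appear directly on Vec3's rustdoc page.
pub struct Vec3 { pub x: f32, pub y: f32, pub z: f32 }

impl Vec3 {
    /// Computes the dot product of `self` and `rhs`.
    pub fn dot(self, rhs: Self) -> f32 {
        self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
    }
}

// Trait method: the docs live on the trait, and Vec3's page only lists `Dot`
// under "Trait Implementations", so readers have to hunt for the right trait.
pub trait Dot<Rhs = Self> {
    type Output;
    fn dot(self, rhs: Rhs) -> Self::Output;
}

impl Dot for Vec3 {
    type Output = f32;
    fn dot(self, rhs: Self) -> f32 {
        self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
    }
}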

mickare (Contributor) commented Jun 23, 2021

After two weeks of experiments implementing a generic ndarray lib that has both const and dynamic structs (with the new const generics) and views on parts of the data, I must confess that generic trait implementations are the worst abominations that exist in Rust (both in std and libs) and should be avoided at all costs. :goberserk: I learned the hard way.

And using macro impls for many type specializations of a generic struct can result in a docs monster. Just take a look at what I found today: https://rust-lang.github.io/packed_simd/packed_simd_2/struct.Simd.html
Have fun. xD

~~ Brainstorming mode ON ~~
I get your point on the complexity and documentation issues of the "mini traits" approach. Maybe having one big generic trait with most methods could also reduce the docs duplicated across the Vec structs? It could keep most of the common methods on a single docs page?
E.g.: Expose the Vector / Matrix trait?
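
A rough sketch of what such a single big trait might look like (purely hypothetical, not a proposed glam API): the common methods and their docs live in one trait, so they show up on one rustdoc page instead of being duplicated per struct.

// Hypothetical "one big trait" sketch; names and methods are illustrative only.
pub trait Vector: Sized + Copy {
    /// The scalar element type of this vector.
    type Scalar;

    /// Creates a vector with all elements set to `value`.
    fn splat(value: Self::Scalar) -> Self;

    /// Computes the dot product of `self` and `rhs`.
    fn dot(self, rhs: Self) -> Self::Scalar;

    /// Returns the squared length of `self`; documented and implemented once
    /// here as a default method instead of once per concrete vector type.
    fn length_squared(self) -> Self::Scalar {
        self.dot(self)
    }
}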


Response to the previous example questions....

It was just an idea, and not a solution. ;)

It's not clear from your example how this approach would work with a storage type of __m128 and a public scalar type of f32, as the traits only have one generic type T but surely two would be required?

Maybe it can be solved via a trait's associated type on T.
And to support SIMD, the non-SIMD arrays must be in block shape.

pub trait Block<T, const N: usize> {
... // whatever is needed
}
pub struct ArrayBlock<T, const N: usize>([T; N]);
impl<T, const N: usize> Block<T, N> for ArrayBlock<T, N> {}
impl Block<u8, 2> for u8x2 {}
impl Block<u8, 4> for u8x4 {}
...
impl Block<i32, 2> for i32x2 {}
impl Block<i32, 4> for i32x4 {}
...
impl Block<f32, 4> for f32x4 {}


/// The type defines the storage type.
/// `const R` for the row size
pub trait StorageType<const R: usize> {
  type Item;
  type Block: Block<Self::Item, R>;
}

macro_rules! impl_simple_storage_r {
  ($t:ty, $r:expr) => {
    impl StorageType<$r> for $t {
      type Item = $t;
      type Block = ArrayBlock<$t, $r>;
    }
  };
}

macro_rules! impl_simple_storage {
  ($t:ty) => {
    impl_simple_storage_r!($t, 1);
    impl_simple_storage_r!($t, 3);
    impl_simple_storage_r!($t, 5);
    impl_simple_storage_r!($t, 6);
    impl_simple_storage_r!($t, 7);
    ...
  };
}
impl_simple_storage!(u8);
impl_simple_storage!(u16);
impl_simple_storage!(u32);
...
impl_simple_storage!(i64);

macro_rules! impl_simd_storage_r {
  ($t:ty, $r:expr) => {
    impl StorageType<$r> for $t {
      type Item = $t;
      type Block = Simd<[$t; $r]>;
    }
  };
}

impl_simd_storage_r!(u8, 2);
impl_simd_storage_r!(u8, 4);
impl_simd_storage_r!(u8, 8);
...
impl_simd_storage_r!(i128, 2);
impl_simd_storage_r!(i128, 4);


pub struct Vec2<T: StorageType<2>>(T::Block);
pub struct Vec3<T: StorageType<3>>(T::Block);
pub struct Vec4<T: StorageType<4>>(T::Block);
...
pub struct Mat2<T: StorageType<2>>([Vec2<T>; 2]);
pub struct Mat3<T: StorageType<3>>([Vec3<T>; 3]);
pub struct Mat4<T: StorageType<4>>([Vec4<T>; 4]);


... // impl Constructors

impl<T: StorageType<4>> Add<Vec4<T>> for Vec4<T>
where
  T::Block: Add<T::Block, Output = T::Block>,
{
  type Output = Self;
  fn add(self, rhs: Vec4<T>) -> Self::Output {
    Vec4(self.0.add(rhs.0))
  }
}

fn example() {
  let foo: Vec3<i32> = Vec3(ArrayBlock([0, 1, 2])); // no simd, plain array storage
  let bar: Vec4<i32> = Vec4::new(0, 1, 2, 3); // simd
}

This is without the nice direct value access (like vec.x).

In short, I think it is difficult to combine explicit SIMD usage with a generic interface, and I think it would greatly complicate the public interface, so it's not a direction I've ever pursued.

Yes, it looks ugly and the code is less readable, because the generic type bound has to be dragged along in all implementations.

aloucks (Contributor) commented Jan 14, 2022

I think the best compromise is to write a code generator, similar to what ash does to generate bindings from vk.xml. The resulting code may have duplication, but the source data to generate that code will not.

klangner commented:

Out of curiosity:
what about an approach similar to the num_complex crate, where we have a generic struct but the implementations are either generic or specific to a type:

pub struct Complex<T> {
    /// Real portion of the complex number
    pub re: T,
    /// Imaginary portion of the complex number
    pub im: T,
}

impl<T> Complex<T>  {
}

impl<T: Clone + Num> Complex<T> {
}

impl<T: Float> Complex<T> {
}

I agree that finding the source code is now a little bit difficult; VS Code jumps to the macro, for example.
Could this work here?

bitshifter (Owner, Author) commented:

@klangner glam could not support SIMD with that approach, as the SIMD storage type is a different type to T.
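
Roughly, the mismatch is this (a simplified sketch with made-up names, not glam's actual definitions):

// A num_complex-style struct stores the scalar type directly, so the storage is T:
pub struct GenericVec4<T> {
    pub x: T,
    pub y: T,
    pub z: T,
    pub w: T,
}

// With explicit SIMD the public scalar type is f32, but the storage is a whole
// __m128 register, so a single generic parameter T cannot describe both.
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::__m128;

#[cfg(target_arch = "x86_64")]
pub struct SimdVec4(__m128);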

bitshifter self-assigned this on May 30, 2022
bitshifter (Owner, Author) commented:

I took the templating approach using Tera. Merged to main in #294.
