Change the implementation of the `cached_*` functions to be structs that hide the cache implementation details #22
Conversation
…t hide the cache implementation details
I am unsure about my documentation of the panics. Mostly I don't feel like I know what a "too large x value" is; I may have misunderstood the implementation, or simply been unable to find what the limit on x values is.
While I think it may be a good idea to do this, this is not the right PR for that. This reverts commit a638170.
…d a remove function to the caches.
I have added some convenience functions to the caches for when the user wants to, e.g., get the value corresponding to some inputs without computing them if they are not in the cache. I was thinking about adding functions that iterate over the values in the cache in various ways, but that required a lot of code and API changes to make it nice, so I do not include them in this PR. The attempt is visible in commit 9e7f5b6.
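As a sketch of what such a convenience accessor could look like (a hypothetical minimal struct; the names and field layout are assumptions, and the real cache in the PR has more methods):

```rust
use std::collections::HashMap;

// Hypothetical minimal cache illustrating a `get` that looks up a
// previously computed value without triggering any computation on a miss.
pub struct CachedBesselJY {
    cache: HashMap<(u64, u64), (f64, f64, f64, f64)>,
}

impl CachedBesselJY {
    /// Returns the cached value for `(nu, x)` if present; never computes.
    pub fn get(&self, nu: f64, x: f64) -> Option<(f64, f64, f64, f64)> {
        // f64 is not Hash/Eq, so the bit patterns are used as the key.
        self.cache.get(&(nu.to_bits(), x.to_bits())).copied()
    }
}

fn main() {
    let mut c = CachedBesselJY { cache: HashMap::new() };
    assert!(c.get(1.0, 2.0).is_none()); // empty cache: no value, no work done
    // Same-module access for the demo; downstream users cannot do this.
    c.cache.insert((1.0f64.to_bits(), 2.0f64.to_bits()), (0.1, 0.2, 0.3, 0.4));
    assert_eq!(c.get(1.0, 2.0), Some((0.1, 0.2, 0.3, 0.4)));
}
```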
…ake with a set of inputs after removing them is recomputing them. And if so, why were they removed in the first place?
…u_Knu, like the other caches have
There may be other functionality one wishes to add, but I think this is a good first step: some convenience functions, and some tools for affecting memory allocation.
Thank you for your excellent PR. I've reviewed the files, and the implementation looks good. I have one suggestion: I noticed that all these structs have similar methods. Since the implementations of these functions are also common, we can implement them at the trait level. This will significantly reduce the amount of code and eliminate duplication. Then, we would only need to separately implement the actual Bessel function calculation code for each struct. What do you think about this approach? I believe it will make the code more maintainable and substantially reduce duplication.
Thank you for the feedback! It is a great idea to try to unify the implementation, since it is the same for all the caches. I tried to do it with a trait, but ran into the issue that each cache must implement the trait, which means that each trait impl essentially still contains all the same code, so it doesn't really save anything. The user must also import the trait alongside the cache in order to use it. The latter is not much of a problem, but the former meant that the total amount of code actually increased. However, if there is a way to do this that I just missed, I welcome feedback. I think I managed to achieve your suggestion in another way, though: I made a macro that creates an impl block implementing the common functionality. This way we only have to call the macro once per cache, and a bunch of code is saved. What do you think of this solution?
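The macro approach described above could look roughly like this (a sketch only; the actual macro name, method set, and key types in the PR may differ):

```rust
use std::collections::HashMap;

// Hypothetical sketch: one macro invocation per cache struct generates
// the shared convenience methods, so the common code is written once.
macro_rules! impl_cache_common {
    ($cache:ident, $key:ty, $value:ty) => {
        impl $cache {
            /// Creates a new, empty cache.
            pub fn new() -> Self {
                Self { cache: HashMap::new() }
            }

            /// Creates a cache with at least the given capacity.
            pub fn with_capacity(capacity: usize) -> Self {
                Self { cache: HashMap::with_capacity(capacity) }
            }

            /// Returns the cached value for `key`, if present,
            /// without computing anything on a miss.
            pub fn get(&self, key: &$key) -> Option<&$value> {
                self.cache.get(key)
            }

            /// Removes a value from the cache and returns it.
            pub fn remove(&mut self, key: &$key) -> Option<$value> {
                self.cache.remove(key)
            }
        }
    };
}

pub struct CachedBesselJY {
    cache: HashMap<(u64, u64), (f64, f64, f64, f64)>,
}

impl_cache_common!(CachedBesselJY, (u64, u64), (f64, f64, f64, f64));

fn main() {
    let _empty = CachedBesselJY::new();
    let mut c = CachedBesselJY::with_capacity(8);
    // Same-module access for the demo; downstream users cannot do this.
    c.cache.insert((1, 2), (0.5, 0.5, 0.5, 0.5));
    assert!(c.get(&(1, 2)).is_some());
    assert_eq!(c.remove(&(1, 2)), Some((0.5, 0.5, 0.5, 0.5)));
    assert!(c.get(&(1, 2)).is_none());
}
```

Because the generated methods are inherent methods on the struct, the `HashMap` field stays private and no trait import is needed at the call site.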
Thank you for your implementation using macros. While it's a good effort to reduce code duplication, I believe we can achieve even better results using traits with default implementations. Let me explain: we can define a trait such as the following:

```rust
use std::collections::HashMap;
use std::hash::Hash;

type BesselKey = (u64, u64);
type BesselValue = (f64, f64, f64, f64);

pub trait Cached<K: Hash + Eq, V> {
    // Required methods
    fn as_hashmap(&self) -> &HashMap<K, V>;
    fn as_hashmap_mut(&mut self) -> &mut HashMap<K, V>;
    fn new() -> Self;
    fn with_capacity(capacity: usize) -> Self;

    // Common methods built from the required methods;
    // we can define default implementations for them.
    fn contains(&self, key: &K) -> bool {
        self.as_hashmap().contains_key(key)
    }

    // Other common methods with default implementations...
}
```

Then, for each cache struct like `CachedBesselJY`, we only implement the required methods and the struct-specific calculation:

```rust
pub struct CachedBesselJY {
    cache: HashMap<BesselKey, BesselValue>,
}

impl Cached<BesselKey, BesselValue> for CachedBesselJY {
    fn as_hashmap(&self) -> &HashMap<BesselKey, BesselValue> {
        &self.cache
    }

    fn as_hashmap_mut(&mut self) -> &mut HashMap<BesselKey, BesselValue> {
        &mut self.cache
    }

    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    fn with_capacity(capacity: usize) -> Self {
        Self { cache: HashMap::with_capacity(capacity) }
    }
}

impl CachedBesselJY {
    pub fn calculate(&mut self, nu: f64, x: f64) -> BesselValue {
        // Struct-specific implementation...
        todo!()
    }
}
```

This approach offers several advantages over using macros: the shared behaviour lives in ordinary trait code rather than in macro expansions, and by using type aliases like `BesselKey` and `BesselValue` the signatures stay readable. It maintains the DRY principle while leveraging Rust's powerful type system and trait capabilities. For large projects with many cache types, the benefits of using traits over macros can be substantial, leading to more efficient development cycles and better long-term maintainability.

What do you think about this approach? Would you like me to elaborate on any part of this explanation?

P.S. Please note that I'm currently writing this on my mobile phone while commuting home by bus. As such, there might be some minor errors in the code examples. I appreciate your understanding, and please feel free to point out any issues you notice. Once I'm back at my desk, I'll be happy to review and refine the code if necessary. Thank you for your patience!
Thank you for the feedback! This was one of the things I tried, however it unfortunately has one issue: it enables the user to obtain mutable access to the underlying `HashMap` through `as_hashmap_mut`, which means they can insert arbitrary values into the cache and break its invariants. We could skip the mutable accessor, but then the trait's default implementations would have no way of modifying the cache.
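The concern above can be made concrete with a small sketch (names taken from the trait example earlier in the thread; this is an illustration of the problem, not the PR's actual code):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Any required trait method is necessarily callable by users of the
// trait, so `as_hashmap_mut` hands out mutable access to the cache.
pub trait Cached<K: Hash + Eq, V> {
    fn as_hashmap_mut(&mut self) -> &mut HashMap<K, V>;
}

pub struct CachedBesselJY {
    cache: HashMap<(u64, u64), (f64, f64, f64, f64)>,
}

impl Cached<(u64, u64), (f64, f64, f64, f64)> for CachedBesselJY {
    fn as_hashmap_mut(&mut self) -> &mut HashMap<(u64, u64), (f64, f64, f64, f64)> {
        &mut self.cache
    }
}

fn main() {
    let mut c = CachedBesselJY { cache: HashMap::new() };
    // Nothing stops downstream code from poisoning the cache with a
    // value that was never computed by the Bessel routines:
    c.as_hashmap_mut().insert((0, 1), (f64::NAN, 0.0, 0.0, 0.0));
    assert!(c.cache[&(0, 1)].0.is_nan());
}
```

With inherent methods (whether hand-written or macro-generated), the field stays private and this kind of tampering is impossible from outside the module.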
Thank you for your insightful response. I apologize for the delay in getting back to you; I had a hospital appointment yesterday that prevented me from doing any work.

That is an excellent point. You're absolutely right: by insisting on a trait implementation, we would inevitably expose the cache to the user. While letting users access the cache directly might seem convenient, it would allow them to break the cache's invariants.

Given this consideration, I concur that implementing with macros is indeed the better approach in this case. It allows us to maintain better control over the cache's invariants and prevent unintended modifications.

Based on this, I'm happy to approve the merge of your pull request. Your implementation using macros is a solid solution that addresses the issues at hand while maintaining the necessary encapsulation. Once again, thank you for your excellent implementation and for sharing your thoughtful perspective on this matter. Your contribution is greatly appreciated.
This PR switches the `cached_*` functions out for structs that contain the `HashMap` inside themselves, as discussed in #12. This way the user does not have to think about the implementation details, and cannot accidentally insert values into the cache that mess with the results.

This PR does not add tests.
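For illustration, usage of such a struct-based API might look like this (a sketch with assumed method names `new` and `calculate`, and a dummy computation standing in for the real Bessel routines):

```rust
use std::collections::HashMap;

pub struct CachedBesselJY {
    // The map is private, so callers cannot tamper with cached values.
    cache: HashMap<(u64, u64), (f64, f64, f64, f64)>,
}

impl CachedBesselJY {
    pub fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    /// Returns the cached value, computing and storing it on a miss.
    pub fn calculate(&mut self, nu: f64, x: f64) -> (f64, f64, f64, f64) {
        let key = (nu.to_bits(), x.to_bits());
        *self.cache.entry(key).or_insert_with(|| {
            // Placeholder for the real Bessel J/Y computation.
            (nu + x, 0.0, 0.0, 0.0)
        })
    }
}

fn main() {
    let mut cache = CachedBesselJY::new();
    let first = cache.calculate(1.0, 2.0);  // computed and stored
    let second = cache.calculate(1.0, 2.0); // served from the cache
    assert_eq!(first, second);
}
```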