Implements in memory data store #20
Conversation
I thought I'd responded to this earlier, but I guess I didn't. :-) I like the idea of this, but it does need a little work. It passes your unit tests, but the integration specs show a lot of failures (right now, all integration specs are in …).

Another thing is that this doesn't implement the ….

This also might give me the motivation I need to allow mappers to use different data sources, since there might be good use cases for production use of a memory store without requiring all of your objects to be stored in it.

I also need to document things better. I was thinking the other day that limiting documentation to the README probably doesn't help with collaboration. I'm a lazy documenter. :-)

In all, I'm pretty excited about this PR. I was planning on waiting until a 1.0 release to support other types of data sources in order to have a more stable API to work with, but I think a memory store is simple enough that we can go ahead and pull it in once it's stable with the rest of the integration specs. Thanks!
Also, I made a lot of assumptions in the code specifically because I was only working with MongoDB before. I'll be playing with this, too, to try and help get it working, so don't hesitate to speak up about anything that feels like the Mapper knows too much, because I'm sure it does. :-)
Thanks for the feedback. I wanted to get this in quickly so we can start talking about it and find where the kinks are. I really like your approach of keeping the persistence logic out of the domain objects. To be honest, I didn't look into the mapper side of things. I'm planning on looking at them more next week. I'll get the integration specs working as well.
Awesome. Looking forward to it. I'll try to get some documentation going in the meantime.
I've added some initial documentation to the wiki. My next step will be trying to refactor some of the ….

Another thing that might be good for a full in-memory implementation would be to get rid of serialization entirely and just store the actual Ruby objects (but cloned, in case of mutation). It would certainly improve performance. I noted something about letting mappers pick their serializers in another issue, but I think doing it for different data sources is just as important, since different types of databases will have their own requirements for how data is stored.
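The clone-on-store idea above could look something like this minimal sketch. The class and method names here are assumptions for illustration, not the real `Perpetuity::Memory` API:

```ruby
# Hypothetical sketch of a serialization-free memory store: it keeps real Ruby
# objects, cloning on insert and on retrieve so callers can't mutate its state.
# (Note: `clone` is shallow; nested structures would need a deep copy.)
class MemoryStore
  def initialize
    @objects = {}
    @next_id = 0
  end

  def insert(object)
    id = (@next_id += 1)
    @objects[id] = object.clone # clone in, so mutating the original later can't leak
    id
  end

  def retrieve(id)
    @objects[id].clone # clone out, so callers can't mutate the stored copy
  end
end

store = MemoryStore.new
user = { name: 'Jamie' }
id = store.insert(user)
user[:name] = 'Changed'   # mutating the original after insert...
store.retrieve(id)[:name] # ...doesn't affect the stored copy: "Jamie"
```

Since there's no serialization round-trip, this should also be much faster than dumping to and loading from a document format.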
Implementing this as a singleton is really making it difficult to test both implementations at once. Any thoughts on how to do this better?
This will iterate over the adapters and run the tests for both:

```ruby
require 'perpetuity'
require 'test_classes'

mem_store = Perpetuity::Memory.new
mongodb = Perpetuity::MongoDB.new db: 'perpetuity_gem_test'

[mem_store, mongodb].each do |storage_adapter|
  describe Perpetuity, "with #{storage_adapter.class}" do
    before(:all) do
      Perpetuity.configure { data_source storage_adapter }
    end

    # All the tests that are currently there
  end
end
```
Now that I think about it, it might need a ….
```ruby
describe Perpetuity::MongoDB do
  context do
    Perpetuity.configure { Perpetuity::MongoDB.new db: 'perpetuity_gem_test' }
```
This line's missing `data_source`. :-)

Also, without putting these `configure` calls in `before(:all) { }` blocks, you won't get the results you're looking for due to how RSpec evaluates spec examples. It will evaluate all of the `describe` blocks (`context` is an alias for `describe`) before executing any examples, so any code not in a `before`, `after`, or `it` block runs on load, meaning it'll configure with `Perpetuity::Memory`, then with `Perpetuity::MongoDB`, and then execute the examples.

I've run into this at least a half-dozen times on various projects. :-)
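The load-order gotcha can be demonstrated without RSpec itself. This toy runner (a deliberately simplified stand-in, not real RSpec) evaluates group bodies immediately and only runs the collected examples afterwards:

```ruby
# Toy model of RSpec's two phases: 1) load/evaluate all group bodies,
# collecting before/it blocks; 2) run the collected examples.
EVENTS = []
EXAMPLES = []

def describe(name, &body)
  body.call # group bodies run immediately, at load time
end

def before(&hook)
  EXAMPLES << hook
end

def it(name, &example)
  EXAMPLES << example
end

describe 'Memory' do
  EVENTS << 'configure Memory' # bare code in the group body: runs at load time
  before { EVENTS << 'before Memory' }
  it('works') { EVENTS << 'example Memory' }
end

describe 'MongoDB' do
  EVENTS << 'configure MongoDB' # also runs at load time, before ANY example
  before { EVENTS << 'before MongoDB' }
  it('works') { EVENTS << 'example MongoDB' }
end

# Only now do the collected hooks and examples run.
EXAMPLES.each(&:call)

# EVENTS.first(2) == ["configure Memory", "configure MongoDB"]
# i.e. both top-level configure calls happened before any example ran.
```

So by the time the first example executes, the data source has already been reconfigured to the last value. Putting the `configure` call in a `before` block defers it until just before the examples in that group run.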
```ruby
require 'test_classes'

describe Perpetuity do
```
Should I move this back into one big test, or do you like the shared-examples approach? One upside of the shared-examples approach is that we can implement each adapter piecemeal. Another is that some data stores will have features others don't.
This is something I was thinking about when reading your diff. I think it can be a good idea to take that kind of modular testing approach. I'm not 100% sure it's better, but I definitely don't hate it. :-)
The main thing I'm wondering is whether behaving too differently based on data source is a good thing. For example, we can indeed index an in-memory database, but it may not be necessary. I'm not sure. For now, go ahead with what makes you comfortable and we'll figure it out down the line.
Regardless of what we end up going with, `it_behaves_like "crud"` is a hilarious line of code.
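Here's a plain-Ruby sketch of the shared-examples idea (the adapter below is a made-up stand-in, not the real `Perpetuity::Memory`): one set of CRUD checks applied to every adapter, which in RSpec would be `shared_examples "crud"` plus `it_behaves_like "crud"` in each adapter's spec.

```ruby
# Made-up minimal adapter, standing in for a real data-source adapter.
class MemoryAdapter
  def initialize
    @records = {}
    @next_id = 0
  end

  def insert(attrs)
    @records[@next_id += 1] = attrs
    @next_id
  end

  def retrieve(id)
    @records[id]
  end

  def delete(id)
    @records.delete(id)
  end
end

# One shared set of CRUD checks, run against each adapter under test.
crud_checks = lambda do |adapter|
  id = adapter.insert(title: 'Hello')
  raise 'create/read failed' unless adapter.retrieve(id) == { title: 'Hello' }
  adapter.delete(id)
  raise 'delete failed' unless adapter.retrieve(id).nil?
  true
end

[MemoryAdapter.new].each { |adapter| crud_checks.call(adapter) }
```

The piecemeal upside: an adapter that doesn't yet support, say, indexing simply doesn't include that shared group until it does.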
My stance is that mappers, not data sources, should be swappable. With this line of thinking, complex queries are done in the mapper instead of on it. Your mappers end up looking like `ArticleMapper.find_with_commenter commenter` instead of `ArticleMapper.select { |article| article.comments.include? commenter }`. This makes it easier to reuse persistence querying logic and makes the intent clearer.

Another benefit of this approach is that we could add a bunch of adapters quickly. Later, we can refactor duplication out of the adapters as we find it, instead of trying to anticipate it now.
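A minimal sketch of such an intention-revealing scope, with hypothetical names (the `Article` shape and the mapper's internals are assumptions for illustration, not Perpetuity's actual API):

```ruby
Article = Struct.new(:title, :commenters)

class ArticleMapper
  def initialize(articles)
    @articles = articles
  end

  # Callers use this intention-revealing scope...
  def find_with_commenter(commenter)
    select { |article| article.commenters.include?(commenter) }
  end

  private

  # ...instead of repeating the raw query logic at every call site.
  def select(&predicate)
    @articles.select(&predicate)
  end
end

articles = [
  Article.new('Data Mappers', ['alice']),
  Article.new('Memory Stores', ['bob'])
]
mapper = ArticleMapper.new(articles)
mapper.find_with_commenter('alice').map(&:title) # => ["Data Mappers"]
```

Keeping `select` private also means each data source's adapter can implement the query however it likes (an in-memory filter here, a real database query elsewhere) without callers changing.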
Edit: Apparently Cmd-Return submits the comment
Absolutely, using methods with intention-revealing names as selection scopes is great.
I'm not sure what you mean about mappers being swappable instead of data sources. I was thinking that you might want to store different types of objects in different data sources. For example, in a social app, you could store users in a SQL database, posts in a document database, and an activity feed in Redis.
We don't have the ability to use multiple data sources in Perpetuity yet, but if we did, it might look something like this:
```ruby
Perpetuity.configure do
  data_source(:mongodb) { Perpetuity::MongoDB.new(…) }
  data_source(:redis) { Perpetuity::Redis.new(…) }
  data_source(:psql) { Perpetuity::PostgreSQL.new(…) }
end

class UserMapper < Perpetuity::Mapper
  data_source Perpetuity.data_source(:psql)
  map User
end

class PostMapper < Perpetuity::Mapper
  data_source Perpetuity.data_source(:mongodb)
  map Post
end

class ActivityFeedEntryMapper < Perpetuity::Mapper
  data_source Perpetuity.data_source(:redis)
  map ActivityFeedEntry
end
```
… and if someone decides that the data source they specified for a particular type of object is no longer suited for their app, they can just switch to using the appropriate one.
What would make Perpetuity better than every other persistence gem out there currently would be the ability to swap mappers. The use case would be swapping in the memory mappers for testing and the main data source for prod/dev.
I think we may be talking about the same idea, just in different ways. :-) You did point out something I didn't mention in my multiple-config post above, though: I'm not so sure swapping the mapper in different environments is the best way to go. Swapping data sources (from MongoDB to your Memory store) within that mapper based on environment is probably preferable. There may be methods in the mapper used as scopes, and you don't want to duplicate those between dev and test environments.
We are definitely going to have to make some Rails-specific considerations for things like environments, though. We might need to create a `perpetuity-rails` gem that extends the functionality of either the Perpetuity module or the Configuration class. And because I like to see example code … :-)
```ruby
Perpetuity.configure do
  environment :test do
    data_source Perpetuity::Memory.new
  end

  environment :development do
    data_source Perpetuity::MongoDB.new(…)
  end
end
```
A mapper's responsibility is to map an object to a data source. You will never get away from having different mappers for different data sources, and it's not a lot to ask someone to rewrite the mapper code when they want to swap data sources.

A different approach might be to have mappers for key-value storage (like ToyStore) and then have finders that do the complex querying.
Yes, a mapper acts as a layer between objects and a data source. But you don't need to use a different mapper class to talk to different data sources; just configure the same mapper class to use a different one.
Different data sources will likely have different serialization requirements, sure. For example, in a SQL DB, the serialized attributes can't be hashes or arrays, whereas that's perfectly valid in a document DB. Another example: serialization isn't required at all for a memory store, since we can simply store the objects themselves. This does mean that serialization will have to be reworked to accommodate the different data sources.
This is based on perpetuity-mongodb and Christopher Garvis's pull request (jgaskins/perpetuity#20) for an in-memory data store.
Closing since data-source adapters are no longer part of the core gem.