Cached Queries #100
Comments
I think it's better to use a third-party caching library that can cache objects.
@angelxmoreno But the question is: why do you think a cache is needed?
@gugglegum I attempted to use Symfony Cache and added a trait on my … This is mainly the reason why I asked the question: to see if there is a known and working solution before I dug deeper.

@froschdesign ty for pointing that out. That seems to be memoizing the objects, but what I am looking for is a solution for storing data for a few minutes. Looking at the identityMap property does shed light on a few things given my previous failed implementation. Ty for highlighting this.

It would be great if we had some documentation on how to implement some kind of cached-queries setup. If I am successful I can write something up, but that is going to take me a little time.
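For reference, here is a minimal sketch of the "store data for a few minutes" idea using Symfony Cache's TTL-based contracts, as opposed to the per-request memoization the identity map provides. The database DSN, the cache key, and the query are illustrative assumptions, not anything from this library:

```php
<?php

use Symfony\Component\Cache\Adapter\FilesystemAdapter;
use Symfony\Contracts\Cache\ItemInterface;

// Illustrative only: cache a query result for a few minutes (TTL-based),
// rather than memoizing objects for the lifetime of one request.
$pdo   = new PDO('mysql:host=localhost;dbname=reports', 'user', 'pass');
$cache = new FilesystemAdapter();

$rows = $cache->get('report.daily_totals', function (ItemInterface $item) use ($pdo): array {
    $item->expiresAfter(300); // keep this result for 5 minutes

    $stmt = $pdo->query('SELECT day, SUM(amount) AS total FROM sales GROUP BY day');

    return $stmt->fetchAll(PDO::FETCH_ASSOC);
});
```

On a cache hit the closure is skipped entirely, so the SQL only runs once per TTL window.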
Why do you need this? Do you have performance problems or something else?
@angelxmoreno I opine that the place for caching is in your domain-layer repositories, not your persistence-layer mappers. That is, cache your domain objects, not your persistence objects. Now, having said that, I presume that your motivation for caching is a performance issue of some sort. Is that the case? If so, are you able to describe it here?
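To illustrate that suggestion, here is a rough sketch of a domain-layer repository that caches domain objects in front of a persistence mapper. The `Report` class, the `fetchById()` mapper call, and the cache key format are assumed names for illustration, not part of this library's API:

```php
<?php

use Psr\SimpleCache\CacheInterface;

/** Illustrative domain object; stands in for whatever your domain model is. */
final class Report
{
    public function __construct(public int $id, public array $data) {}
}

/**
 * Hypothetical domain-layer repository: it caches domain objects and only
 * falls back to the persistence-layer mapper on a cache miss.
 */
final class ReportRepository
{
    public function __construct(
        private object $mapper,        // your persistence-layer mapper (assumed API)
        private CacheInterface $cache, // any PSR-16 cache implementation
        private int $ttl = 300,        // cache lifetime in seconds
    ) {
    }

    public function findById(int $id): Report
    {
        $key = "report.$id";

        $cached = $this->cache->get($key);
        if ($cached instanceof Report) {
            return $cached;
        }

        // Assumed mapper call; substitute whatever your persistence layer exposes.
        $row = $this->mapper->fetchById($id);

        $report = new Report($id, (array) $row);
        $this->cache->set($key, $report, $this->ttl);

        return $report;
    }
}
```

Keeping the cache at this level leaves the mapper and its identity map untouched, and makes invalidation a domain concern (e.g. drop the key whenever the repository saves a changed report).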
I am working on CLI scripts that generate reports from a massive MySQL database. Each report paginates through a few million rows of data. I would like to improve the performance of these CLI scripts by caching frequently made SQL calls.
@angelxmoreno I don't think you need to go this way. In pagination you're fetching different data every time, so if you cache it, you only gain performance on the next report generation, and by then your reports may be outdated. Also note that if the data changes while you paginate, the pages can shift, so some records may be fetched twice and others missed. Usually, when you build a report, you don't need to pull the whole database into your script; you can use JOINs and GROUP BY clauses to let MySQL prepare the needed data for you. And even if you really need to fetch and iterate over a big amount of data in your script, it's better not to use pagination. Use …
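The concrete suggestion at the end of that comment is cut off above, but a rough sketch of the general advice, under assumed table and column names (`orders`, `customers`), might look like this: let MySQL do the JOIN/GROUP BY work, and when you really do need every row, stream it with an unbuffered PDO query instead of paginating:

```php
<?php

// Illustrative only: aggregate in MySQL (JOIN + GROUP BY) and stream the
// result row by row instead of paginating it in PHP.
$pdo = new PDO('mysql:host=localhost;dbname=reports', 'user', 'pass');

// Unbuffered mode: rows come from the server as you iterate, so a result
// with millions of rows is not held in PHP memory all at once.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$sql = 'SELECT c.name, COUNT(*) AS orders, SUM(o.total) AS revenue
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        GROUP BY c.id, c.name';

foreach ($pdo->query($sql, PDO::FETCH_ASSOC) as $row) {
    fputcsv(STDOUT, $row); // write each aggregated row straight to the report
}
```

This reads the whole result set in one consistent pass, which also avoids the page-shifting problem described above.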
ty |
Any plans or ideas on how to implement cached results, records, or recordsets?