Feature Request: Data ingested per day/week/month/year #544
Comments
I believe he's referring to storage, as in how many GB are added to the NAS each week. We could query every item with addedat after xxx and figure out the size of those items.
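A minimal sketch of that idea in Python (the field names `added_at` and `file_size` are assumptions for illustration, not PlexPy's actual schema):

```python
import time

def bytes_added_since(items, cutoff_epoch):
    """Sum file sizes (bytes) of items added after cutoff_epoch.

    `items` is an iterable of dicts with hypothetical 'added_at'
    (epoch seconds) and 'file_size' (bytes) keys.
    """
    return sum(i["file_size"] for i in items if i["added_at"] > cutoff_epoch)

# Example with stand-in data: one item added 2 days ago, one 30 days ago.
items = [
    {"added_at": time.time() - 2 * 86400, "file_size": 4500000000},
    {"added_at": time.time() - 30 * 86400, "file_size": 9000000000},
]
one_week_ago = time.time() - 7 * 86400
print("%.1f GB added this week" % (bytes_added_since(items, one_week_ago) / 1e9))
```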
My bad.
Yep, that's what I was talking about. Since I mainly use Plex on my file server, the number of drives I need to buy and how often I buy them depends on how much data my server ingests, so this would really help with that.
I have started on this request. So far it supports a time range, weekdays, hour, month, and so on. The call is a little expensive (200 MB of memory on 25k media files) but at least it's fast. (It can be slow if you're using a huge time range, because of the sorting.) I still have some Mako work and JS left (the worst part, IMO).
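One way that kind of time bucketing might look (a sketch only; the grouping keys and field names are assumptions, not the actual implementation):

```python
from collections import defaultdict
from datetime import datetime

def ingest_by_bucket(items, bucket="month"):
    """Total bytes added per time bucket: 'hour', 'weekday', or 'month'."""
    keyfuncs = {
        "hour": lambda dt: dt.hour,          # 0-23
        "weekday": lambda dt: dt.weekday(),  # Monday = 0
        "month": lambda dt: dt.month,        # 1-12
    }
    keyfunc = keyfuncs[bucket]
    totals = defaultdict(int)
    for item in items:
        dt = datetime.fromtimestamp(item["added_at"])
        totals[keyfunc(dt)] += item["file_size"]
    # Sorting the handful of bucket keys here is cheap; sorting every
    # raw item across a huge time range is the expensive part.
    return dict(sorted(totals.items()))
```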
What data are you calling? Can you just load the cached JSON from the media info tables?
I just query the server for every item to get the file size. I could use the cached files, but I think that would use more memory, since there is a lot of data in them. I only cache the item type and the time it was added. I'll test it.
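For reference, a slim cache entry along those lines might hold just a couple of fields per item (hypothetical names), which is why it stays small compared to the full media info JSON:

```python
# Hypothetical slim cache entry: only the fields needed for grouping.
# File sizes are fetched from the server separately rather than cached.
slim_cache_entry = {
    "item_type": "episode",  # movie / episode / track, etc.
    "added_at": 1461196800,  # epoch seconds when the item was added
}
```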
The cache should only be in the kilobytes. Did you find a way to speed up getting file sizes? |
Not using your method. I needed to merge the data I'm pulling with the data in the DB, and the overhead was just too great, resulting in a memory error :(. Getting all the file sizes takes about 3 sec.
I'd love to be able to see the amount of data being added to Plex. Other tools can show the number of shows and which shows were added and when; I'm hoping PlexPy could add to the graphs the amount of data being added per X amount of time, as a way to gauge when I need to buy more storage.