
mRemovedViewsCache is an array that holds views according to position #12

Closed · wants to merge 1 commit

Conversation

VincentKetelaars

I have changed mRemovedViewsCache so that it no longer stores views by their type, but by the position those views have in the adapter. This way the adapter is handed back the same view when it requests one from the cache. In the previous arrangement, the LIFO order of returning views caused issues when views of the same type had not yet been created.
The array is resized on every onChanged.
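A minimal sketch of the position-keyed cache this PR describes, assuming a plain array with one slot per adapter position that is resized in onChanged; class and method names here are hypothetical illustrations, not the project's actual code:

```java
import android.view.View;

// Hypothetical sketch of a position-keyed view cache; names are
// illustrative and do not match the project's actual implementation.
class PositionKeyedViewCache {
    // One slot per adapter position; resized on every onChanged().
    private View[] mRemovedViewsCache = new View[0];

    void onChanged(int adapterCount) {
        // Grow or shrink the cache to match the adapter's item count.
        View[] resized = new View[adapterCount];
        System.arraycopy(mRemovedViewsCache, 0, resized, 0,
                Math.min(mRemovedViewsCache.length, adapterCount));
        mRemovedViewsCache = resized;
    }

    void addRemovedView(int position, View view) {
        if (position >= 0 && position < mRemovedViewsCache.length) {
            mRemovedViewsCache[position] = view;
        }
    }

    View getRemovedView(int position) {
        if (position < 0 || position >= mRemovedViewsCache.length) {
            return null;
        }
        View view = mRemovedViewsCache[position];
        mRemovedViewsCache[position] = null;
        // Returns null for any position not seen yet, i.e. a cache miss.
        return view;
    }
}
```

Under this scheme a position always gets back the exact view it previously held, which is the behavior the PR is after.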

bdonahue (Contributor) commented Aug 3, 2014

This will result in a substantial number of cache misses: every new item will always be a miss. The point of the cache is to provide ready-to-use views as the user scrolls down. This change no longer does that; it only caches views that have already been seen.

This has two major issues:

  1. The cache grows linearly as the user scrolls, which will eventually lead to an out-of-memory error and performance degradation.

  2. The cache doesn't work. The point of the cache is to avoid view inflation, which is very slow and causes lots of jank when done on the UI thread. The existing cache, once full, had a very low miss rate and thus avoided both inflation and findViewById, which is also very slow (see the sketch of the type-keyed arrangement below). This configuration has a 100% miss rate on future items in the list, so scrolling down will cause significant jank.
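For contrast, here is a minimal sketch of the type-keyed, LIFO scrap pool bdonahue describes; this is an illustrative assumption about the existing design, not the project's actual code:

```java
import android.view.View;
import java.util.ArrayDeque;

// Hypothetical sketch of a type-keyed scrap cache; names are
// illustrative and do not match the project's actual implementation.
class TypeKeyedViewCache {
    private final ArrayDeque<View>[] mScrapViews;

    @SuppressWarnings("unchecked")
    TypeKeyedViewCache(int viewTypeCount) {
        mScrapViews = new ArrayDeque[viewTypeCount];
        for (int i = 0; i < viewTypeCount; i++) {
            mScrapViews[i] = new ArrayDeque<>();
        }
    }

    // Views scrolled off-screen go back into the pool for their type,
    // so the cache size stays bounded by the number of on-screen views.
    void addScrapView(int viewType, View view) {
        mScrapViews[viewType].push(view);
    }

    // Any cached view of the right type can be rebound to a new item,
    // so fresh rows reuse an inflated view instead of inflating again.
    View getScrapView(int viewType) {
        ArrayDeque<View> pool = mScrapViews[viewType];
        return pool.isEmpty() ? null : pool.pop(); // LIFO reuse
    }
}
```

Because the pool is keyed by type rather than position, items never seen before can still be served from the cache once it warms up, which is what keeps the miss rate low and the memory footprint bounded.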

VincentKetelaars (Author) commented

When writing this I was not keeping the inflation part in mind, which I admit is an important consideration. Since the list will usually be rather small, I doubt the memory strain is significant, but it is true that this does not scale the way the current implementation does.
