Per-application system metadata cache reduces startup performance #306

Closed
Vogtinator opened this issue Feb 20, 2021 · 24 comments

Comments

@Vogtinator (Contributor) commented Feb 20, 2021

I noticed that krunner uses over 150MiB of private RAM when idle, which I tracked down to be caused by the AppStream module.
Using heaptrack, I can see that most of that is allocated by AppStream::Pool::load and its callees.

I tracked that down further by running G_MESSAGES_DEBUG=all appstreamcli search xterm, which printed "System cache is stale, ignoring it.". Running appstreamcli refresh-cache manually fixed that.

That's already done by the package manager on repo metadata downloads though, but looking at the generated files I noticed that those are locale specific! The locale the package manager runs as is in most cases not the user's locale: a local root shell uses POSIX, packagekitd uses C, etc. Even if it matched, the user could change their locale at any point to one different from the system locale and end up with a mysteriously slower system with double (!) the usual RAM use (krunner, plasmashell).

It's much more likely that applications running as the user interact with AppStream metainfo, so IMO it makes sense to keep the cache somewhere in ~, even if only as fallback if the system cache is not usable.

@ximion (Owner) commented Feb 20, 2021

That looks like a bug in KRunner - the cache is transient by default and stored in memory, unless a location is explicitly defined by the client. When running appstreamcli refresh, the tool could make some effort to load the system locale even if the environment locale is set differently, but tbh I think the package manager should take care of setting the configured system locale properly instead (APT definitely does so, even with PackageKit).

KRunner could either explicitly specify a cache location, or let AppStream do that by setting as_pool_set_cache_location (pool, ":temporary") - the odd thing is: that option is the default, rather than the "keep everything in memory" option, which is :memory. Are you sure this is the issue? Unless KRunner overrides these settings, it should already offload all unused data to disk and keep as little as possible in memory.

@Vogtinator (Contributor, Author) commented Feb 21, 2021

> That looks like a bug in KRunner - the cache is transient by default and stored in memory, unless a location is explicitly defined by the client.

appstreamcli search behaves the same as krunner and plasmashell and allocates >150MiB of RAM every time it is called.

> When running appstreamcli refresh, the tool could make some effort to load the system locale even if the environment locale is set differently, but tbh I think the package manager should take care of setting the configured system locale properly instead (APT definitely does so, even with PackageKit).

That won't help much as the system locale is not necessarily what the user specifies anyway.

> KRunner could either explicitly specify a cache location, or let AppStream do that by setting as_pool_set_cache_location (pool, ":temporary") - the odd thing is: that option is the default, rather than the "keep everything in memory" option, which is :memory. Are you sure this is the issue? Unless KRunner overrides these settings,

This is the runner code: https://github.com/KDE/plasma-workspace/blob/19ba5f1cf34cbe5bfe71d3bf40e7cb8bae3520b7/runners/appstream/appstreamrunner.cpp#L139

It does Pool::load once followed by Pool::search - there aren't any other calls to AppStream FWICT.

> it should already offload all unused data to disk and keep as little as possible in memory.

On every startup I see it parsing a lot of xml data, which seems to be the main source of the memory use. The result of that parsing doesn't appear to be cached anywhere, at least not across multiple starts or even multiple processes.

@ximion (Owner) commented Feb 21, 2021

Sooo... Creating an AsPool instance is cheap, but calling load() can be quite expensive, depending on how much stuff is cached. Furthermore, if the client didn't specify any extra settings, calling load() is even more expensive.
Also, load() means "reload", so it will throw away any old data and reload the pool from scratch, unless AS_CACHE_FLAG_NO_CLEAR is set.
As far as I understand, KRunner calls load() unconditionally on every single search call, which of course is super expensive. I think I can provide a patch for that, and also add an is_loaded function to AsPool so clients don't need to track that state explicitly.
Btw, AsPool should be threadsafe already (but the extra mutex doesn't hurt for sure).

@ximion (Owner) commented Feb 21, 2021

FWIW, could also be that I am reading the KRunner code wrong and it doesn't reload on every search - but at the moment, it looks like this is the case to me and this also would explain your issue really well.

@Vogtinator (Contributor, Author) commented Feb 21, 2021

> Also, load() means "reload", so it will throw away any old data and reload the pool from scratch, unless AS_CACHE_FLAG_NO_CLEAR is set.

I don't think that is an issue here:

  • When the system cache can be used, memory use is fine
  • appstreamcli search behaves identically to krunner

> Also, load() means "reload", so it will throw away any old data and reload the pool from scratch, unless AS_CACHE_FLAG_NO_CLEAR is set.

That's only relevant if the same process loads the pool multiple times, right? What I'm looking for is a cross-application cache somewhere in ~/.cache/ or so, transparently used like the system cache.

> Btw, AsPool should be threadsafe already (but the extra mutex doesn't hurt for sure).

AFAICT it also protects warnedOnce

> FWIW, could also be that I am reading the KRunner code wrong and it doesn't reload on every search - but at the moment, it looks like this is the case to me and this also would explain your issue really well.

It's called only once - when the first search happens. It's a function-scope static variable, so evaluated on the first call only.

@ximion (Owner) commented Feb 21, 2021

> That's only relevant if the same process loads the pool multiple times, right? What I'm looking for is a cross-application cache somewhere in ~/.cache/ or so, transparently used like the system cache.

There is no cross-application cache, each one creates its own cache, as apps may individually change the pool contents. But there will be a cache per-app, so once the thing is loaded, memory usage should go down. I'll have a look if that works correctly later.

> It's called only once - when the first search happens. It's a function-scope static variable, so evaluated on the first call only.

Yeah, I was dumb there... Shouldn't have read this code so late at night, my brain completely ignored the static keyword on that line.

So, the only way I see you getting this problem is if the pool doesn't free used memory once the cache was created, but keeps it. From a quick look at the code, this only ever happens if there was a problem with the cache itself, so we failed to store something in LMDB - but that doesn't seem to be the case here. I'll run some tests when I am back home.

@Vogtinator (Contributor, Author) commented Feb 21, 2021

> There is no cross-application cache, each one creates its own cache, as apps may individually change the pool contents.

The system cache works fine though? Moving the system cache from /var/cache/app-info/cache/ into ~/.cache/ should work just fine FWICT.

> But there will be a cache per-app, so once the thing is loaded, memory usage should go down. I'll have a look if that works correctly later.

But not persistent? At least that's what I'm seeing.

> It's called only once - when the first search happens. It's a function-scope static variable, so evaluated on the first call only.
>
> Yeah, I was dumb there... Shouldn't have read this code so late at night, my brain completely ignored the static keyword on that line.

Heh - I have a "no public bug reports/issues at night" rule to avoid that ;-)

> So, the only way I see you getting this problem is if the pool doesn't free used memory once the cache was created, but keeps it.

I don't think the memory is leaked; it's just never given back to the OS, so it's essentially lying around mostly unused after the pool got loaded. It's high peak heap use from many small allocations (for XML nodes?), which can't easily be unmapped by libc.

There's also a huge performance difference in appstreamcli search and krunner when comparing valid system cache vs. invalid system cache.

@ximion (Owner) commented Feb 21, 2021

> Heh - I have a "no public bug reports/issues at night" rule to avoid that ;-)

A good rule - getting rid of an issue quickly is neat too though, especially for the reporter, and especially if my first reply was that this may not be an issue in my project but theirs ^^

Okay, I think I got to the bottom of this - and the solution surprised me quite a bit.
Here's a longer explanation:
I was running a simple search operation with appstreamcli, one time with a system cache, and one time without. With the system cache, memory usage (RSS) was at ~5MB, without it memory usage was at 30MB. So clearly something was wrong here. After playing around with Massif and Heaptrack and finding absolutely nothing wrong with AppStream (which has - according to Valgrind - also no memory leaks during this operation, nice!), I plotted a more detailed graph and at some point noticed that both the system-cache-using variant as well as the one that reloads all data were showing only 5MB of used memory before quitting.
This made me drill into the various memory pools used in some operations, mostly GLib's g_slice system, with no results. However, while reading about how Linux handles memory, I read that the OS may not bother actually reclaiming small freed chunks of memory until memory really runs low. I actually knew that, but surely 15MB isn't a small chunk?
Both AppStream and libxml2 allocate a lot of small memory chunks though, and especially libxml is a bit crazy with that - and we use XML parsing excessively if the cache isn't populated yet. Since I was out of other ideas, I just threw a malloc_trim (0); into the code after the cache was created and - voilà - the memory usage was now exactly the same in both scenarios.

So, tl;dr: Your "huge memory usage" issue seems to be a mirage. Can you please try throwing a malloc_trim(0) into KRunner's code, to verify that this indeed works for your case, and I wasn't looking at the wrong issue the whole time? This certainly "fixed" the supposed high memory usage for my case.

It feels wrong for libappstream to call malloc_trim at all, but I'll do some research on that. In any case, memory chunks still assigned to libappstream could be reused faster, so trimming them may appear to save memory but actually just make the program run slower.

@Vogtinator (Contributor, Author)

You just confirmed my theory:

> I don't think the memory is leaked, but it's just never given back to the OS. So it's essentially lying around mostly unused after the pool got loaded. It's just high peak heap memory use, but using small allocations (for XML nodes?) which can't easily be unmapped by libc.

The issue here is not just about memory use (which calling malloc_trim would indeed solve, but that only works with glibc), but there are also other downsides of not using a cache, like severely degraded performance as it has to parse all XML documents.

With system cache, appstreamcli search takes just 60ms. Without, 850ms.

@ximion (Owner) commented Feb 22, 2021

But the cache is used and created. Just the first run will be slower, as a new cache has to be created (and of course appstreamcli without a system cache will create a new one every time - a longer-running program with one AsPool instance will reuse the same on-disk cache for its entire runtime).

@Vogtinator (Contributor, Author)

> But the cache is used and created. Just the first run will be slower, as a new cache has to be created (and of course appstreamcli without a system cache will create a new one every time - a longer-running program with one AsPool instance will reuse the same on-disk cache for its entire runtime).

I think we're finally converging on what this issue is about. The system cache does not have this problem as it is properly shared between starts of the same application and also different applications. What I'm asking for is the same feature which doesn't require root privileges: A transparently used cache in ~/.cache/ or similar.

@ximion (Owner) commented Feb 22, 2021

Yes, but that is a wontfix issue, as the cache is per-application - if it was global, any change to the pool done by Discover or any other app would impact other applications, making fault finding difficult. Also, the caches are only writable by a single process, having multiple processes write to the same cache will fail. Also also, having two processes create a cache at the same time is pretty hard to do properly too.
In any case though, memory usage should be similarly low, so is your problem actually about memory usage, or initial startup performance of the application now?

@Vogtinator (Contributor, Author)

> Yes, but that is a wontfix issue, as the cache is per-application - if it was global, any change to the pool done by Discover or any other app would impact other applications, making fault finding difficult. Also, the caches are only writable by a single process, having multiple processes write to the same cache will fail. Also also, having two processes create a cache at the same time is pretty hard to do properly too.

Applications could just do an implicit appstreamcli refresh-cache on startup?

> In any case though, memory usage should be similarly low, so is your problem actually about memory usage, or initial startup performance of the application now?

Mostly the memory use. Introducing a cache would fix both issues though.

@ximion (Owner) commented Feb 22, 2021

There is no increased memory use though, that's what my previous message was about - memory usage is exactly the same, no matter whether the system cache existed or didn't exist before (I measured it).
A per-user shared "system" cache is risky to do, as we would basically need to create a new, pristine pool first and load it in order to create it properly, as the pool under an application's control might have been modified. This will actually increase peak memory usage (but ultimately should end up being exactly the same as before).
I'll think about it, but in any case, this change would not at all help with memory usage, only with initial startup speed.

@Vogtinator (Contributor, Author)

> There is no increased memory use though, that's what my previous message was about - memory usage is exactly the same, no matter whether the system cache existed or didn't exist before (I measured it).

I think we're misunderstanding each other again. Without calling malloc_trim, peak memory use remains until the application quits. Currently it doesn't call that, which means that each process using AppStream wastes ~150MiB of RAM here until it exits.

> A per-user shared "system" cache is risky to do, as we would basically need to create a new, pristine pool first and load it in order to create it properly, as the pool under an application's control might have been modified. This will actually increase peak memory usage (but ultimately should end up being exactly the same as before).

Only for a single application though, and only on the first start after the metadata changed, to refresh the cache. After that, the cache would only be read.

Also, at least krunner and plasmashell just use a pristine pool, so this could be detected and just written to disk directly.

> I'll think about it, but in any case, this change would not at all help with memory usage, only with initial startup speed.

If the cache is used, it doesn't do any XML parsing and thus doesn't allocate 150MiB of memory during that. See also the issue description.

@ximion (Owner) commented Feb 22, 2021

> I think we're misunderstanding each other again. Without calling malloc_trim, peak memory use remains until the application quits. Currently it doesn't call that, which means that each process using AppStream wastes ~150MiB of RAM here until it exits.

So, have you tried that now? If this works for your case, it means that memory has already been freed and isn't used anymore, and is just associated with the process in case the process needs it back quickly. It's a performance feature of glibc. malloc_trim only reduces fake memory usage, not actual memory usage (it's kind of a userspace variant of https://www.linuxatemyram.com/).

@Vogtinator (Contributor, Author)

> malloc_trim only reduces fake memory usage, not actual memory usage (it's kind of a userspace variant of https://www.linuxatemyram.com/).

No, this is actually about real memory usage. The allocated pages were written to, so the kernel can't just throw them away. At best they could be swapped out of main RAM, but they can't simply be discarded. This really is about 150MiB of wasted memory per process.

@ximion (Owner) commented Feb 22, 2021

I really do wonder whether we are talking past each other or actually about completely different things.
I just compiled KRunner (Plasma 5.21), standard debug build. I ensured the system cache was gone, started krunner and searched for "web". The AppStream per-user temporary cache was created, seems to be reused correctly as long as krunner is alive, and the total memory usage after the one search (RSS) was 78MB.
I then added a malloc_trim call after the AppStream pool was loaded and repeated the same procedure. After the search, the memory usage was 66MB. Not a huge reduction, and very far off from your 150MB.

Repeating the exact same thing with the system cache yielded the same results, with the system cache in use, malloc_trim of course did a bit less, but overall memory consumption was very reasonable.

This was all with AppStream's Git master though, and I recently and completely by accident fixed a huge memory leak in AppStream's Qt bindings: Basically, whenever you searched anything that could return multiple results, the search results were never properly freed. So the more searches you did, the more memory you would use.
I wonder whether your issue is actually this problem, and has nothing to do with the caches...
The memory leak patch in question is b576c61 (a bit misnamed, because I forgot to split the patch in two after fixing all the leaks)

I am against adding malloc_trim calls to libappstream, as they will hurt allocation performance and, as a library, I have no idea how the calling process will actually use memory. Sharing a per-user system cache between processes may be possible, but it will be quite a bit of work to implement properly, and libappstream will still have to parse a lot of XML and write a transient per-user cache, so it may not actually fix your issue (btw, you can see where the per-app cache is located in ~/.cache in detailed memory information - so, caching definitely works).

@Vogtinator (Contributor, Author)

> I really do wonder whether we are talking past each other or actually about completely different things.
> I just compiled KRunner (Plasma 5.21), standard debug build. I ensured the system cache was gone, started krunner and searched for "web". The AppStream per-user temporary cache was created, seems to be reused correctly as long as krunner is alive, and the total memory usage after the one search (RSS) was 78MB.

With system cache, RssAnon grows from ~34MiB to ~47MiB, without system cache it grows to ~150MiB. Maybe you just have way less metainfo to load?

> This was all with AppStream's Git master though, and I recently and completely by accident fixed a huge memory leak in AppStream's Qt bindings: Basically, whenever you searched anything that could return multiple results, the search results were never properly freed. So the more searches you did, the more memory you would use.
> I wonder whether your issue is actually this problem, and has nothing to do with the caches...
> The memory leak patch in question is b576c61 (a bit misnamed, because I forgot to split the patch in two after fixing all the leaks)

I don't think that's related (though it's definitely good to know!), because my test search is "xterm"/"asdf" and neither of those produce any results. appstreamcli search has identical behaviour regarding memory growth and AFAICT that doesn't use the Qt bindings.

> I am against adding malloc_trim calls to libappstream as they will hurt allocation performance and as a library I have no idea how the calling process will actually use memory.

I agree. malloc_trim is not something a library should have to do. The best option is to just avoid allocating that much in the first place ;-)

> Sharing a per-user-system-cache between processes may be possible, but it will be quite a bit of work to implement properly,

Is appstreamcli refresh-cache atomic? If so, it would be possible to just move the system cache location into ~/.cache/app-info and call appstreamcli refresh-cache on startup (when loading a pool), unconditionally.

> and libappstream will still have to parse a lot of XML and will write a transient per-user cache, so it may not actually properly fix your issue (btw, you can see where the per-app cache is located in ~/.cache in detailed memory information - so, caching definitely works)

Currently it wastes 150MiB of RAM in every process using AppStream every time, but with proper caching it would only do that once, in a single process, after each metainfo change.

ximion changed the title from "No caching in user applications" to "Per-application system metadata cache reduces startup performance" on Feb 23, 2021
@ximion (Owner) commented Feb 23, 2021

> With system cache, RssAnon grows from ~34MiB to ~47MiB, without system cache it grows to ~150MiB. Maybe you just have way less metainfo to load?

Definitely not, I deliberately have tons of data. Most of it is in YAML format though. You are on OpenSUSE, I guess? In that case you would have more XML data, and we allocate and free even more small memory chunks when parsing XML.
This memory is effectively freed already though, so you could just throw a malloc_trim into krunner if that bothers you.

> Is appstreamcli refresh-cache atomic?

It definitely isn't at the moment. And generating that cache also wouldn't really help you much - krunner is one of the earliest apps to start, so it would usually be the application that creates this cache in the first place. Which helps KDE Discover and other apps' startup performance, but that would be about it.
Sharing caches between applications feels very risky, especially if one could delete the cache or write to it any time. The "write-to-temp, delete original, rename file" dance might work here to make this atomic, if LMDB actually handles this correctly, which I don't know.

Still, I don't get your memory issue - if we once requested a lot of memory, then freed it again, surely the OS will be able to reclaim the free pages now that we don't use them anymore... If malloc_trim is required for that (I recently found http://xmlsoft.org/xmlmem.html which makes it seem like that), then krunner should just call this so it uses the minimal amount of memory even after it has written its temporary cache (which it will always do, actually, even with a system cache active). Might make future heap allocations slower for a bit of time, though.

@Vogtinator (Contributor, Author)

> With system cache, RssAnon grows from ~34MiB to ~47MiB, without system cache it grows to ~150MiB. Maybe you just have way less metainfo to load?

> Definitely not, I deliberately have tons of data. Most of it is in YAML format though. You are on OpenSUSE, I guess? In that case you would have more XML data, and we allocate and free even more small memory chunks when parsing XML.
> This memory is effectively freed already though, so you could just throw a malloc_trim into krunner if that bothers you.

Yes, the 150MiB are all caused by allocations inside libxml. If you need some data to reproduce the issue, take one of the files from http://download.opensuse.org/tumbleweed/repo/oss/repodata/?P=*appdata.xml.gz and put it into /var/cache/app-info/xmls/.

> Is appstreamcli refresh-cache atomic?

> It definitely isn't at the moment. And generating that cache also wouldn't really help you much - krunner is one of the earliest apps to start, so it would usually be the application that creates this cache in the first place. Which helps KDE Discover and other apps' startup performance, but that would be about it.

Neither krunner nor plasmashell call AppStream::Pool::load on startup, but only when it's actually needed (relevant search term). It would have full effect on subsequent logins, as long as the cache doesn't have to be refreshed.

> Sharing caches between applications feels very risky, especially if one could delete the cache or write to it any time. The "write-to-temp, delete original, rename file" dance might work here to make this atomic, if LMDB actually handles this correctly, which I don't know.

Yeah, that should work, as long as LMDB cooperates and doesn't try to do any path based access after opening the cache initially.

> Still, I don't get your memory issue - if we once requested a lot of memory, then freed it again, surely the OS will be able to reclaim the free pages now that we don't use them anymore...

That depends on what the libc memory allocator does. Generally it only unmaps (i.e. gives back) freed memory it mapped directly, which is usually done only for big allocations done with mmap (if you want to know more, search for MALLOC_MMAP_THRESHOLD), not for those using sbrk. Small allocations almost always use the latter, which is why those have such an effect here. This is a really deep rabbit hole with gory details inside (buckets, freelists, per-thread pools, ...).

> If malloc_trim is required to do that (I recently found http://xmlsoft.org/xmlmem.html which makes it seem like that), then krunner should just call this

Yes, I'm thinking about that as a workaround, might even get accepted upstream. I'll play around with M_TRIM_THRESHOLD/M_TOP_PAD a bit, that might also work.

Neither of those would benefit from the 10x performance increase the cache provides, though.

> so it uses the minimal amount of memory even after it has written its temporary cache (which it will always do, actually, even with a system cache active).

You got to the main point of this bug report: If this behaviour is changed, the issue would mostly disappear.

> Might make future heap allocations slower though for a bit of time.

ximion closed this as completed in 194e315 on Feb 26, 2021
@ximion (Owner) commented Feb 26, 2021

I still think the memory argument is very weak, but the startup performance argument is very real. AppStream's caching code will get a major rewrite soon (some time this year maybe...). Fortunately the current caching code is rather easy to read, with the exception of some insane locking going on in AsPool (but that's a mess to untangle at a different time).

The current implementation will share a cache for system data between all apps, either using the system one if it's up-to-date, or otherwise creating one per-user. This operation should be atomic (and seems to work fine in my limited tests).

This change is very invasive though, so please give it a lot of testing!

@Vogtinator (Contributor, Author)

Finally had some time to get back to this.

> I still think the memory argument is very weak,

For YAML I agree, as that apparently doesn't need nearly as much memory allocation.

> but the startup performance argument is very real. AppStream's caching code will get a major rewrite soon (some time this year maybe...). Fortunately the current caching code is rather easy to read, with the exception of some insane locking going on in AsPool (but that's a mess to untangle at a different time).
>
> The current implementation will share a cache for system data between all apps, either using the system one if it's up-to-date, or otherwise creating one per-user. This operation should be atomic (and seems to work fine in my limited tests).

\o/ Thanks a lot!

I had a look at the code and might have found some issues (or am just misunderstanding the code):

  • !g_str_has_prefix (dir, "/home/"); isn't always accurate - home directories can be anywhere
  • as_pool_refresh_system_cache creates the temporary cache file g_mkstemp (cache_fname_tmp); but then calls as_pool_cleanup_cache_dir (pool, sys_cache_dir); which will immediately delete it again. It's probably recreated by as_cache_open, but due to the deletion that's no longer atomic.
  • This cleaning of the cache dir may also race with other applications doing as_pool_refresh_system_cache at the same time.
  • If g_rename (cache_fname_tmp, cache_fname) fails, it still assumes the cache was updated and does an as_touch_location (cache_fname); - in that case on the old cache file, though.

> This change is very invasive though, so please give it a lot of testing!

I built latest master (49b937c) and played around a bit with appstreamcli, krunner and plasmashell. I couldn't find any issues so far. Only thing I noticed is that when running appstreamcli search for the first time, it takes about twice as long as appstreamcli refresh-cache + appstreamcli search combined! It's like it creates the cache but then doesn't use it and loads everything again anyway.

localhost:~ # rm -rf ~/.cache/appstream /var/cache/app-info/cache
localhost:~ # time (appstreamcli refresh-cache; appstreamcli search discover) >/dev/null
The AppStream system cache was updated, but some components were ignored. Refer to the verbose log for more information.

real    0m0,932s
user    0m0,817s
sys     0m0,116s
localhost:~ # rm -rf ~/.cache/appstream /var/cache/app-info/cache
localhost:~ # time appstreamcli search discover >/dev/null

** (appstreamcli:3308): WARNING **: 15:33:19.523: Unable to refresh system cache: The AppStream system cache was updated, but some components were ignored. Refer to the verbose log for more information.

real    0m1,923s
user    0m1,724s
sys     0m0,189s

After that, the cache is established and appstreamcli search returns in just ~70ms each time.

I also noticed this rather odd wording: Unable to refresh system cache: The AppStream system cache was updated, but...

Is there anything else I should test specifically?

I suppose the cache refreshing done on repo updates by the package manager could be dropped now. This way the cache is only generated when it's actually needed by an application and also always in the correct locale. The additional ~1s on initial cache creation happens in separate threads anyway (at least in discover, plasmashell, krunner), so shouldn't be an issue.

@Vogtinator (Contributor, Author)

Apparently the fix for this is part of the latest release now. Were any of the issues I found addressed?

ximion added a commit that referenced this issue Jun 22, 2021
This apparently happens some times, and with cold caches it means we
will load all data twice, leading to a pretty big startup delay.
CC: #306