
Caching seems not to work at all #11378

Closed
schlensen opened this issue Feb 1, 2022 · 10 comments
Comments

schlensen commented Feb 1, 2022

Preflight Checklist

Describe the Bug

While digging deep into Directus, the database, and especially the cache (using both Redis and the in-memory store), I noticed some very strange behaviour in the caching mechanism.

First, the cache settings from the config.json passed into the Directus Docker container:

    "CACHE_ENABLED": true,
    "CACHE_TTL": "30m",
    "CACHE_NAMESPACE": "directus-cache",
    "CACHE_SCHEMA": false,
    "CACHE_AUTO_PURGE": true,
    "CACHE_STORE": "redis",
    "CACHE_REDIS_HOST": "host",
    "CACHE_REDIS_PORT": 6379,
    "CACHE_REDIS_PASSWORD": "pw",
    "CACHE_REDIS_TLS": {}

We run a setup on AWS (EC2 Linux instances, ElastiCache for Redis, RDS Aurora Serverless PostgreSQL 10, EFS), but we were also able to reproduce the following behaviour locally with both the in-memory cache and a local Redis.

I created a collection "Test" and added one item using the app. Then I connected to the Redis cache and ran `redis-cli monitor` to watch the cache operations live.

Then I called the Directus items endpoint to request the Test items via "/items/Test":

[Screenshot: postman-testcustomer-get-request]

The corresponding log in the Redis cache looks as follows:

[Screenshot: cache-write-get-request]

As you can see, there is no "get" for any specific cache item except "permissions"; the actual content item is never read from the cache. What is really strange to me is that the item is written to the cache again ("set") even though nothing was changed (it was a GET request).

This behaviour is consistent when firing multiple GET requests at "/items/Test"; the only thing that changes is the expiration date, which moves forward with every request (which would only be correct for a create or update):

[Screenshot: cache-write-get-request-multiple]
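For reference, a read-through cache is normally expected to "get" before it ever issues a "set", and to refresh the TTL only on writes. The following is a minimal, hypothetical sketch of that expected behaviour (not Directus's actual implementation; all names are made up) to contrast with the log above, where every GET produces a fresh "set":

```python
import time

class ReadThroughCache:
    """Minimal read-through cache: reads hit the cache first;
    a 'set' (and TTL refresh) happens only on a cache miss."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (value, expires_at)
        self.ops = []     # operation log, analogous to `redis-cli monitor`

    def get_or_load(self, key, load_from_db):
        self.ops.append(("get", key))
        entry = self.store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]                    # cache hit: no write, no TTL refresh
        value = load_from_db()                 # cache miss: query the database
        self.ops.append(("set", key))
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

# Two consecutive GETs on the same item:
cache = ReadThroughCache(ttl_seconds=1800)    # CACHE_TTL "30m"
db = {"items/Test": [{"id": 1}]}
cache.get_or_load("items/Test", lambda: db["items/Test"])
cache.get_or_load("items/Test", lambda: db["items/Test"])
print(cache.ops)
# [('get', 'items/Test'), ('set', 'items/Test'), ('get', 'items/Test')]
```

With this behaviour, the monitor log would show exactly one "set" per TTL window; the log above instead shows a "set" on every request.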

The last thing I tested was changing the value of the Test item directly in the Postgres database (Postgres 10), without using the API or the app at all:

[Screenshot: Test-item-database-change]

and then called the Test items again (there were two items at this point):

[Screenshot: Test-items-after-database-change]

The manual database changes were immediately visible through the API, so the items were read straight from the database, not from the cache.

To Reproduce

First approach

  • Set up Directus with caching enabled
  • Follow the cache log (e.g. `redis-cli monitor`)
  • Insert an item
  • Request the item via the API call "/items/{collection}"
  • Check the cache log:
    • No "get" call for the item appears
    • A "set" call writes the requested item to the cache again on every request

Second approach

  • Set up Directus with caching enabled
  • Perform an API call on a test item
  • Connect to the database
  • Change a value of the test item directly in the database
  • Request the same item via the API
  • The item contains the changes made in the database (i.e. the cache was bypassed, when the cached version was expected)
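The second approach can be simulated without Directus at all. This is a hypothetical sketch (the `api_get` helper and keys are made up, not Directus code): with a working read-through cache, a change made directly in the database should stay invisible until the TTL expires.

```python
import time

# Hypothetical stand-ins for the database and the cache.
database = {"Test/1": {"value": "original"}}
cache = {}   # key -> (value, expires_at)
TTL = 1800   # 30 minutes, matching CACHE_TTL above

def api_get(key):
    """Return the cached value if present and unexpired, else read the DB."""
    entry = cache.get(key)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]
    value = dict(database[key])  # copy, so later DB edits don't leak into the cache
    cache[key] = (value, time.monotonic() + TTL)
    return value

first = api_get("Test/1")                              # warms the cache
database["Test/1"]["value"] = "changed directly in DB"  # out-of-band edit
second = api_get("Test/1")                             # should hit the cache

print(first["value"], "/", second["value"])
# original / original
```

With a working cache both reads return "original"; the issue reported here is that the second read already shows the database change.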

Errors Shown

No response

What version of Directus are you using?

9.4.3 (production), 9.5.0 (local dev-environment)

What version of Node.js are you using?

16.13.1

What database are you using?

Postgres 10.14

What browser are you using?

Chrome

What operating system are you using?

macOS (locally), Amazon-Linux (production)

How are you deploying Directus?

Docker

eXsiLe95 (Contributor) commented Feb 2, 2022

I can confirm that on GET requests the cache is not read; instead, the database values are returned.

I tested this by changing a field value of an item directly in the database (not via the API/app) and got the "fresh" database values, even though the cache still held a representation of this item with a valid, not-yet-expired expiration date. I would have expected to rightfully get the cached version of the item here.

EDIT: Directus 9.4.3 with PostgreSQL 10.14

licitdev (Member) commented Feb 3, 2022

@schlensen Please try removing "CACHE_AUTO_PURGE": true from your config and confirm whether the issue persists. Thank you.

viters (Contributor) commented Feb 3, 2022

I am experiencing the same issue. Redis runs on Kubernetes alongside Directus. Regardless of whether CACHE_AUTO_PURGE is true, false, or absent, it makes no difference: changes made directly in the database are immediately visible in the API response.

licitdev (Member) commented Feb 3, 2022

We will have to take a closer look. Thank you!

Notes

When accessing an item from the admin app at /admin/content/{collection}/{id}, the cache entry at items/{collection}/{id} is updated to the latest content even when CACHE_AUTO_PURGE=false.

viters (Contributor) commented Feb 3, 2022

@licitdev My use case is slightly different, though. I monitored Redis and can see that Directus caches the permissions, but there are no "set" calls at all when using the GraphQL API. According to the docs, that should not be the case:

[Screenshot: documentation excerpt]

viters (Contributor) commented Feb 3, 2022

@licitdev As my problem is different from this issue, I moved my findings to #11418.

CoenWarmer commented
Any update on this @licitdev @viters?

schlensen (Author) commented
@licitdev It's really strange: I can't reproduce the behaviour anymore. If I request items now, the only entries I see in the Redis log are:

1644405315.127890 [xxx:56532] "get" "directus-cache:c209cdb1097d2d201b4c34ccb2609c34f96fe2ea"
1644405315.130388 [xxx:56532] "get" "directus-cache:c209cdb1097d2d201b4c34ccb2609c34f96fe2ea__expires_at"
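Those two keys suggest the payload and its expiry are stored under separate sibling keys (the hash and `<hash>__expires_at`). A minimal sketch of that storage pattern (hypothetical, not Directus's actual code; the key "abc123" is a made-up stand-in for the hash):

```python
import time

store = {}  # flat key/value mapping, standing in for Redis

def cache_set(namespace, key, value, ttl_seconds):
    """Write the payload and its expiry timestamp as two sibling keys."""
    store[f"{namespace}:{key}"] = value
    store[f"{namespace}:{key}__expires_at"] = time.time() + ttl_seconds

def cache_get(namespace, key):
    """Return the payload only while the sibling expiry key is in the future."""
    expires_at = store.get(f"{namespace}:{key}__expires_at")
    if expires_at is None or expires_at <= time.time():
        return None  # missing or expired
    return store.get(f"{namespace}:{key}")

cache_set("directus-cache", "abc123", {"items": []}, ttl_seconds=1800)
print(cache_get("directus-cache", "abc123"))
# {'items': []}
```

Under this pattern, a healthy read produces exactly the pair of "get" calls shown in the log above: one for the payload and one for its `__expires_at` sibling.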

I cannot explain why; I changed neither the Directus version nor the auto-purge flag nor anything else. Several days ago we ran a massive delete operation on this system (about 30,000 items at once), which caused Directus to get stuck and crash. Maybe that operation "confused" the cache and database.

For now I'd say everything is fine again. I'll report if we can reproduce the problem in the future.

Thanks for the great support here and keep it up!

Jan

viters (Contributor) commented Feb 9, 2022

@CoenWarmer
@CoenWarmer I found the issue described in #11418 and created PR #11462, but that is a different problem from the one in this task. I cannot reproduce this specific problem, because when I use GET for the /graphql request, caching works fine.

azrikahar (Contributor) commented
For now I'd say everything is fine again. I'll report if we can reproduce the problem in the future.

Closing this in accordance with the OP's update; we'll reopen it if the problem occurs again in the future. The different issue reported in this thread is being tracked separately.

github-actions bot locked this conversation as resolved and limited it to collaborators on Feb 3, 2024.