
feat(http prober): #1285 implement cache http response #1296

Merged

Conversation

@syamsudotdev (Contributor) commented May 29, 2024

Monika Pull Request (PR)

This PR resolves #1285

What feature/issue does this PR add

  1. Added a new file, src/components/probe/prober/http/response-cache.ts: an in-memory cache for HTTP responses

How did you implement / how did you fix it

  1. Created an in-memory cache with @isaacs/ttlcache (see the sketch after this list)
  2. Changed HTTPProber to use the cached response on the first probing attempt
  3. Added a new CLI option --ttl-cache to set the cache's time-to-live
  4. Added a new CLI option --verbose-cache to monitor cache use
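
For illustration, a minimal sketch of the approach described above. The CachedResponse type, the key derivation, and the default TTL here are all hypothetical; the PR's actual code lives in response-cache.ts and will differ in detail.

```ts
import TTLCache from '@isaacs/ttlcache'

// Hypothetical shape of a cached probe result; the PR's actual type may differ.
type CachedResponse = { status: number; body: string; responseTime: number }

// The TTL would be driven by the --ttl-cache CLI option; 5 minutes is illustrative.
const cache = new TTLCache<string, CachedResponse>({ ttl: 5 * 60 * 1000 })

// Hypothetical key: identical requests (same method and URL) share one entry.
const keyOf = (method: string, url: string) => `${method}:${url}`

export async function getOrFetch(
  method: string,
  url: string,
  fetcher: () => Promise<CachedResponse>
): Promise<CachedResponse> {
  const key = keyOf(method, url)
  const hit = cache.get(key)
  if (hit !== undefined) {
    console.log(`Cache HIT for ${url}`) // in the PR, surfaced only with --verbose-cache
    return hit
  }
  console.log(`Cache MISS for ${url}`)
  const response = await fetcher()
  cache.set(key, response) // entry expires automatically after the TTL
  return response
}
```

With something like this in place, HTTPProber consults the cache on the first probing attempt before issuing a real request, so identical requests within the TTL window reuse a single response.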

How to test

  1. Run npm run start -- --verbose-cache
  2. Observe the log messages for Cache HIT and Cache MISS
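
Both options can be combined in one run, for example npm run start -- --ttl-cache <value> --verbose-cache; note that the expected format of the TTL value isn't documented in this thread.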

[Screenshot: terminal log output showing Cache HIT and Cache MISS messages]

@syamsudotdev self-assigned this on May 29, 2024
codecov bot commented May 29, 2024

Codecov Report

Attention: Patch coverage is 43.58974% with 22 lines in your changes missing coverage. Please review.

Project coverage is 63.53%. Comparing base (6a29470) to head (ab0442b).
Report is 16 commits behind head on main.

| Files | Patch % | Lines |
|-------|---------|-------|
| src/components/probe/prober/http/response-cache.ts | 25.92% | 18 Missing and 2 partials ⚠️ |
| src/components/probe/prober/http/index.ts | 88.88% | 0 Missing and 1 partial ⚠️ |
| src/components/probe/prober/http/request.ts | 66.66% | 0 Missing and 1 partial ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #1296      +/-   ##
==========================================
+ Coverage   62.51%   63.53%   +1.01%     
==========================================
  Files         112      109       -3     
  Lines        3391     3409      +18     
  Branches      591      580      -11     
==========================================
+ Hits         2120     2166      +46     
+ Misses       1079     1058      -21     
+ Partials      192      185       -7     
```

@haricnugraha (Contributor) commented:

Wouldn't it be easier to use a library with tested code, such as https://www.npmjs.com/package/@isaacs/ttlcache?

@syamsudotdev (Contributor, Author) replied:

> Wouldn't it be easier to use a library with tested code, such as https://www.npmjs.com/package/@isaacs/ttlcache?

@haricnugraha Updated the cache backend to use that package.

@sapiderman (Contributor) left a comment:


Looks good, OK to merge.
BTW: do we have memory consumption figures for, say, a cache of 100 probe responses? 1000?

@syamsudotdev (Contributor, Author) replied:

> BTW: do we have memory consumption figures for, say, a cache of 100 probe responses? 1000?

@sapiderman A typical web page, such as YouTube or sentry.io, is around 500 kilobytes of HTML text:

```
curl -L https://sentry.io > test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  557k    0  557k    0     0   363k      0 --:--:--  0:00:01 --:--:--  711k
```

For 100 probes of web pages, I'd say it would be around 50 megabytes.
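
That is, 100 responses × ~500 KB ≈ 50 MB; by the same arithmetic, 1,000 cached responses would come to roughly 500 MB.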

@syamsudotdev merged commit 18ec55c into hyperjumptech:main on Jun 11, 2024 (7 checks passed)
@syamsudotdev deleted the issue/1285-cache-http-response branch on June 11, 2024
Merging this pull request closed issue #1285: Cache the responses of identical requests to prevent multiple requests within a short time.