This repository was archived by the owner on Aug 7, 2025. It is now read-only.

Conversation


@GavinPHR GavinPHR commented Nov 5, 2022

Description

Adds a decorator that caches the input/output of a handler.
It assumes that:

  • A Redis server is running
  • Both the input and output of the decorated function can be pickled

A typical usage would be:

from ts.utils.redis_cache import handler_cache

class SomeHandler(BaseHandler):
    def __init__(self):
        ...
        self.handle = handler_cache(host='localhost', port=6379, db=0, maxsize=2)(self.handle)
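
A rough sketch of what such a `handler_cache` decorator could look like (an illustration only, not the PR's actual code; `redis-py` as the client, pickled arguments as the key, and the eviction call are all assumptions):

```python
import pickle

def handler_cache(host="localhost", port=6379, db=0, maxsize=128):
    """Decorator factory: cache a handler's pickled input -> pickled output in Redis."""
    try:
        import redis  # assumption: redis-py is the client library
        client = redis.Redis(host=host, port=port, db=db)
        client.ping()  # raises if no server is listening
    except Exception:
        # Prerequisites not met: fall back to a no-op decorator, raise nothing.
        return lambda func: func

    def decorator(func):
        def wrapper(*args):
            key = pickle.dumps(args)          # requires the input to be picklable
            cached = client.get(key)
            if cached is not None:
                return pickle.loads(cached)   # cache hit: skip the handler
            result = func(*args)
            if client.dbsize() >= maxsize:
                client.delete(client.randomkey())  # crude random eviction
            client.set(key, pickle.dumps(result))  # requires picklable output
            return result
        return wrapper
    return decorator
```

With no reachable server (or no `redis` package installed), the decorator degrades to a pass-through, so the handler still works without a cache.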

Type of change

  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Feature/Issue validation/testing

Manually tested; automated testing is difficult without a Redis server in the test environment.

Checklist:

  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?


codecov bot commented Nov 5, 2022

Codecov Report

Merging #1952 (819d85a) into master (a8ff888) will increase coverage by 8.64%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #1952      +/-   ##
==========================================
+ Coverage   44.66%   53.31%   +8.64%     
==========================================
  Files          63       70       +7     
  Lines        2624     3157     +533     
  Branches       56       56              
==========================================
+ Hits         1172     1683     +511     
- Misses       1452     1474      +22     
Impacted Files Coverage Δ
ts/metrics/metric.py 80.64% <0.00%> (-8.65%) ⬇️
ts/arg_parser.py 25.80% <0.00%> (-3.23%) ⬇️
ts/model_loader.py 80.48% <0.00%> (-0.69%) ⬇️
ts/service.py 78.26% <0.00%> (-0.62%) ⬇️
ts/metrics/metrics_store.py 92.98% <0.00%> (ø)
ts/tests/unit_tests/test_worker_service.py 100.00% <0.00%> (ø)
ts/tests/unit_tests/test_beckend_metric.py
ts/metrics/metric_abstract.py 92.85% <0.00%> (ø)
ts/metrics/metric_cache_abstract.py 92.77% <0.00%> (ø)
ts/metrics/caching_metric.py 89.47% <0.00%> (ø)
... and 8 more


@msaroufim

Looking good. Please create a separate folder in examples, call it redis, and add a brief README with a usage example.

@GavinPHR GavinPHR marked this pull request as ready for review November 6, 2022 23:21

@msaroufim msaroufim left a comment


LGTM. Please address the minor feedback and post some logs of this thing working, and we should be good to merge.

EDIT: Also show how to start the server


Note that if the prerequisites are not met, a no-op decorator will be used and no exceptions will be raised.

We will now assume a Redis server is started on `localhost` at port `6379`.
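
For reference, starting a local server is a one-liner (assuming Redis is installed, e.g. via `brew install redis` or `apt install redis-server`):

```shell
# Start Redis in the background on the default port
redis-server --daemonize yes --port 6379

# Sanity check: should print PONG
redis-cli -p 6379 ping

# Stop the server when done
redis-cli -p 6379 shutdown
```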

show how to start it

@GavinPHR

GavinPHR commented Nov 7, 2022

Commands to package and serve (same as in README):

torch-model-archiver --model-name mnist --version 1.0 --model-file examples/image_classifier/mnist/mnist.py --serialized-file examples/image_classifier/mnist/mnist_cnn.pt --handler examples/redis_cache/mnist_handler_cached.py
mkdir -p model_store
mv mnist.mar model_store/
torchserve --start --model-store model_store --models mnist=mnist.mar --ts-config examples/image_classifier/mnist/config.properties

Query (same image twice):

curl http://127.0.0.1:8080/predictions/mnist -T examples/image_classifier/mnist/test_data/0.png; curl http://127.0.0.1:8080/predictions/mnist -T examples/image_classifier/mnist/test_data/0.png

Log from when the Redis server is NOT started; a no-op decorator is used.
With ENABLE_TORCH_PROFILER=true, we see that the profiles are printed twice.

2022-11-07T17:26:51,079 [INFO ] W-9007-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1667842011079
2022-11-07T17:26:51,089 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - Backend received inference at: 1667842011
2022-11-07T17:26:51,090 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - Saving chrome trace to : /tmp/pytorch_profiler/mnist
2022-11-07T17:26:51,103 [WARN ] W-9007-mnist_1.0-stderr MODEL_LOG - /Users/haoranpeng/mambaforge/lib/python3.10/site-packages/torch/nn/functional.py:1331: UserWarning: dropout2d: Received a 2-D input to dropout2d, which is deprecated and will result in an error in a future release. To retain the behavior and silence this warning, please use dropout instead. Note that dropout2d exists to provide channel-wise dropout on inputs with 2 spatial dimensions, a channel dimension, and an optional batch dimension (i.e. 3D or 4D inputs).
2022-11-07T17:26:51,103 [WARN ] W-9007-mnist_1.0-stderr MODEL_LOG -   warnings.warn(warn_msg)
2022-11-07T17:26:51,112 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                              Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg    # of Calls  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                        preprocess        64.92%       6.844ms        67.28%       7.093ms       7.093ms             1  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                         inference         8.37%     882.000us        30.80%       3.247ms       3.247ms             1  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                      aten::linear         0.16%      17.000us        15.40%       1.623ms     811.500us             2  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                       aten::addmm        14.85%       1.566ms        14.92%       1.573ms     786.500us             2  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                      aten::conv2d         0.15%      16.000us         4.42%     466.000us     233.000us             2  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                 aten::convolution         0.09%      10.000us         4.27%     450.000us     225.000us             2  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                aten::_convolution         1.45%     153.000us         4.17%     440.000us     220.000us             2  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                 aten::thnn_conv2d         0.09%      10.000us         2.65%     279.000us     279.000us             1  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 33
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -        aten::_slow_conv2d_forward         2.27%     239.000us         2.55%     269.000us     269.000us             1  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG -                  aten::max_pool2d         0.17%      18.000us         1.39%     147.000us     147.000us             1  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - Self CPU time total: 10.542ms
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_LOG - 
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0 ACCESS_LOG - /127.0.0.1:51469 "PUT /predictions/mnist HTTP/1.1" 200 34
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_METRICS - HandlerTime.Milliseconds:23.3|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:a763b1cf-655e-4f82-a298-7d22e7da4c60,timestamp:1667842011
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:28.5|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:a763b1cf-655e-4f82-a298-7d22e7da4c60,timestamp:1667842011
2022-11-07T17:26:51,113 [INFO ] W-9007-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667841674
2022-11-07T17:26:51,113 [DEBUG] W-9007-mnist_1.0 org.pytorch.serve.job.Job - Waiting time ns: 190042, Backend time ns: 34394333
2022-11-07T17:26:51,114 [INFO ] W-9007-mnist_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842011
2022-11-07T17:26:51,114 [INFO ] W-9007-mnist_1.0 TS_METRICS - WorkerThreadTime.ms:2|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842011
2022-11-07T17:26:51,148 [INFO ] W-9000-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1667842011148
2022-11-07T17:26:51,153 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - Backend received inference at: 1667842011
2022-11-07T17:26:51,153 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - Saving chrome trace to : /tmp/pytorch_profiler/mnist
2022-11-07T17:26:51,164 [WARN ] W-9000-mnist_1.0-stderr MODEL_LOG - /Users/haoranpeng/mambaforge/lib/python3.10/site-packages/torch/nn/functional.py:1331: UserWarning: dropout2d: Received a 2-D input to dropout2d, which is deprecated and will result in an error in a future release. To retain the behavior and silence this warning, please use dropout instead. Note that dropout2d exists to provide channel-wise dropout on inputs with 2 spatial dimensions, a channel dimension, and an optional batch dimension (i.e. 3D or 4D inputs).
2022-11-07T17:26:51,165 [WARN ] W-9000-mnist_1.0-stderr MODEL_LOG -   warnings.warn(warn_msg)
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                              Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg    # of Calls  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                        preprocess        58.84%       5.629ms        61.30%       5.865ms       5.865ms             1  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 28
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                         inference         7.80%     746.000us        34.55%       3.305ms       3.305ms             1  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                      aten::linear         0.16%      15.000us        15.81%       1.513ms     756.500us             2  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                       aten::addmm        15.23%       1.457ms        15.32%       1.466ms     733.000us             2  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                      aten::conv2d         0.20%      19.000us         7.45%     713.000us     356.500us             2  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                 aten::convolution         0.11%      11.000us         7.25%     694.000us     347.000us             2  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                aten::_convolution         2.31%     221.000us         7.14%     683.000us     341.500us             2  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0 ACCESS_LOG - /127.0.0.1:51471 "PUT /predictions/mnist HTTP/1.1" 200 28
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                 aten::thnn_conv2d         0.10%      10.000us         4.79%     458.000us     458.000us             1  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667841674
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -        aten::_slow_conv2d_forward         3.88%     371.000us         4.68%     448.000us     448.000us             1  
2022-11-07T17:26:51,176 [DEBUG] W-9000-mnist_1.0 org.pytorch.serve.job.Job - Waiting time ns: 83125, Backend time ns: 28528959
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG -                       aten::zeros         1.37%     131.000us         3.61%     345.000us     115.000us             3  
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842011
2022-11-07T17:26:51,176 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:26:51,177 [INFO ] W-9000-mnist_1.0 TS_METRICS - WorkerThreadTime.ms:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842011
2022-11-07T17:26:51,177 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - Self CPU time total: 9.567ms
2022-11-07T17:26:51,177 [INFO ] W-9000-mnist_1.0-stdout MODEL_LOG - 
2022-11-07T17:26:51,177 [INFO ] W-9000-mnist_1.0-stdout MODEL_METRICS - HandlerTime.Milliseconds:22.95|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:8c8990c4-ace8-4831-982b-ff15f169ec25,timestamp:1667842011
2022-11-07T17:26:51,177 [INFO ] W-9000-mnist_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:23.02|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:8c8990c4-ace8-4831-982b-ff15f169ec25,timestamp:1667842011

With a Redis server:

2022-11-07T17:31:58,679 [INFO ] W-9003-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1667842318679
2022-11-07T17:31:58,680 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - Backend received inference at: 1667842318
2022-11-07T17:31:58,680 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - Saving chrome trace to : /tmp/pytorch_profiler/mnist
2022-11-07T17:31:58,690 [WARN ] W-9003-mnist_1.0-stderr MODEL_LOG - /Users/haoranpeng/mambaforge/lib/python3.10/site-packages/torch/nn/functional.py:1331: UserWarning: dropout2d: Received a 2-D input to dropout2d, which is deprecated and will result in an error in a future release. To retain the behavior and silence this warning, please use dropout instead. Note that dropout2d exists to provide channel-wise dropout on inputs with 2 spatial dimensions, a channel dimension, and an optional batch dimension (i.e. 3D or 4D inputs).
2022-11-07T17:31:58,691 [WARN ] W-9003-mnist_1.0-stderr MODEL_LOG -   warnings.warn(warn_msg)
2022-11-07T17:31:58,697 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:31:58,697 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                              Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg    # of Calls  
2022-11-07T17:31:58,697 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:31:58,697 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                        preprocess        73.96%       7.002ms        75.68%       7.165ms       7.165ms             1  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                         inference         4.01%     380.000us        23.19%       2.195ms       2.195ms             1  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                      aten::linear         0.14%      13.000us        12.47%       1.181ms     590.500us             2  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                       aten::addmm        12.01%       1.137ms        12.15%       1.150ms     575.000us             2  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                      aten::conv2d         0.12%      11.000us         4.70%     445.000us     222.500us             2  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                 aten::convolution         0.08%       8.000us         4.58%     434.000us     217.000us             2  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                aten::_convolution         1.45%     137.000us         4.50%     426.000us     213.000us             2  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                 aten::thnn_conv2d         0.04%       4.000us         3.02%     286.000us     286.000us             1  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -        aten::_slow_conv2d_forward         2.54%     240.000us         2.98%     282.000us     282.000us             1  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG -                  aten::max_pool2d         0.08%       8.000us         1.33%     126.000us     126.000us             1  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - ---------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  
2022-11-07T17:31:58,698 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - Self CPU time total: 9.467ms
2022-11-07T17:31:58,699 [INFO ] W-9003-mnist_1.0-stdout MODEL_LOG - 
2022-11-07T17:31:58,699 [INFO ] W-9003-mnist_1.0-stdout MODEL_METRICS - HandlerTime.Milliseconds:16.6|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:27a6fd8b-1254-45ac-8c93-5ac2f3db73a1,timestamp:1667842318
2022-11-07T17:31:58,699 [INFO ] W-9003-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 20
2022-11-07T17:31:58,699 [INFO ] W-9003-mnist_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:18.78|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:27a6fd8b-1254-45ac-8c93-5ac2f3db73a1,timestamp:1667842318
2022-11-07T17:31:58,700 [INFO ] W-9003-mnist_1.0 ACCESS_LOG - /127.0.0.1:52059 "PUT /predictions/mnist HTTP/1.1" 200 26
2022-11-07T17:31:58,700 [INFO ] W-9003-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318
2022-11-07T17:31:58,700 [DEBUG] W-9003-mnist_1.0 org.pytorch.serve.job.Job - Waiting time ns: 141000, Backend time ns: 21540459
2022-11-07T17:31:58,700 [INFO ] W-9003-mnist_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318
2022-11-07T17:31:58,700 [INFO ] W-9003-mnist_1.0 TS_METRICS - WorkerThreadTime.ms:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318
2022-11-07T17:31:58,721 [INFO ] W-9006-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1667842318721
2022-11-07T17:31:58,722 [INFO ] W-9006-mnist_1.0-stdout MODEL_LOG - Backend received inference at: 1667842318
2022-11-07T17:31:58,722 [INFO ] W-9006-mnist_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1
2022-11-07T17:31:58,722 [INFO ] W-9006-mnist_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:0.35|#ModelName:mnist,Level:Model|#hostname:haoranpeng-mbp,requestID:e1213fa0-7ef2-4977-8a7b-cc6d1aed7feb,timestamp:1667842318
2022-11-07T17:31:58,722 [INFO ] W-9006-mnist_1.0 ACCESS_LOG - /127.0.0.1:52060 "PUT /predictions/mnist HTTP/1.1" 200 2
2022-11-07T17:31:58,723 [INFO ] W-9006-mnist_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318
2022-11-07T17:31:58,723 [DEBUG] W-9006-mnist_1.0 org.pytorch.serve.job.Job - Waiting time ns: 67166, Backend time ns: 2034042
2022-11-07T17:31:58,723 [INFO ] W-9006-mnist_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318
2022-11-07T17:31:58,723 [INFO ] W-9006-mnist_1.0 TS_METRICS - WorkerThreadTime.ms:1|#Level:Host|#hostname:haoranpeng-mbp,timestamp:1667842318

redis-cli:

% redis-cli
127.0.0.1:6379> KEYS *
1) "\x80\x04\x95D\x01\x00\x00\x00\x00\x00\x00]\x94]\x94}\x94\x8c\x04body\x94\x8c\bbuiltins\x94\x8c\tbytearray\x94\x93\x94B\x10\x01\x00\x00\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x1c\x00\x00\x00\x1c\b\x00\x00\x00\x00Wf\x80H\x00\x00\x00\xd7IDATx\x9cc`\x18X`\xcb\xacUv\xb0\xac\xcc\x16\x8b\x14\xdf\xe6\xafo>\xfd\x03\x82\xaf\xafC0$\xa7\x03\xc5\xaf\x1e\xd8\xb4i\xcb\xbf\x7f\x1f\xf5\xd0\xe4\xb4_\xff{\xe4 \xc3\xc3\xc0\xc0\xd4\xf0\xe7\xffZATI\x8b\x7f\x7f\xb3\xa1\xcc\xb6_\xff\xbcQ%\xed\xff\xcd\x83\xb3\xef\xfe\x9b\x8b*y\xe8_:\x9c=\xed\xdfU\x149\xa5;\xef\xad\xe0\x9c\x104\xc9\xba\x7f\xab\x19pJ^{o\x83G\xf28\x03NI\xee\xdbx$\x93\xfe!K.\xfaw\x11\xa7\xa4\xf1;\xd4@@\x964^\xfa\xef0\x0b\xb2\xa4\xd3G\xb8$\xf3\xf2\x7f\x8f\xcdQ\x1c\xcbp\xed\xaa\b\x98\xd6\x9bq\xea\xdf?{\x064\xc9\x7fg\xb6\x80\xc0\xeb\x7f\xff^\xcd\xe1B\x93\x0c<\xfb\x0f\x02\xfe\xbc\xaa`\xc0\x00R\x97\xc0r330\xa5\xe8\n\x00\xc5ztB\xe8\xed?\xef\x00\x00\x00\x00IEND\xaeB`\x82\x94\x85\x94R\x94saa."
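
The key shown above is simply the pickled handler input (from the pickle opcodes it appears to be a list of lists of `{"body": bytearray(<PNG bytes>)}`). A quick sketch of the round trip with a stand-in payload (the real PNG bytes are elided):

```python
import pickle

# Stand-in for a TorchServe request batch; the real redis-cli key above
# pickles the same shape, with the full PNG bytes as the body.
payload = [[{"body": bytearray(b"\x89PNG...")}]]

key = pickle.dumps(payload)          # what gets stored as the Redis key
assert pickle.loads(key) == payload  # decoding a key recovers the request
```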

Cache eviction was manually tested; it is hard to demonstrate here because of the random eviction policy.

@msaroufim msaroufim requested review from HamidShojanazeri, lxning and maaquib and removed request for HamidShojanazeri, mreso, agunapal and lxning November 8, 2022 19:04
@msaroufim

Hi @GavinPHR, can you please move all your code to the examples folder? For something to live in utils we're interested in a more generic cache solution. The PR looks good otherwise and is approved, but I won't merge it until after Nov 15 since we're doing a code freeze; for the purposes of the bootcamp we are good to go.

@GavinPHR

@msaroufim Sure, I just moved it.

@msaroufim

Gonna close this for now since we're looking at a native cache integration

@msaroufim msaroufim closed this Jul 21, 2023