bytes_read unexpected behavior #1127
---
Hi,
---
I don't think our source code plays any role here. Before the change, it looked something like this:

```php
public function getFromCache($key){
    return $this->objMemcached->get($key);
}
```

Now it's:

```php
public function getFromCache($key){
    return self::$cached[$key] ??= $this->objMemcached->get($key);
}
```

Of course, this is a simplification, just to explain a little about how we reduce connections to Memcached. It allowed us to cut the calls from 800 to 300 per HTTP request. But that's not the main point. Take a look at these two screenshots from Grafana:

[screenshot 1: Grafana read/write panel for Memcached]

[screenshot 2: network traffic from one application node to Memcached (eth1)]

The first one shows the Grafana view for Memcached and illustrates read/write activity. The JSON for this panel looks like this:
"id": 19,
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 13
},
"type": "graph",
"title": "Read/Write",
"datasource": {
"type": "prometheus",
"uid": "000000009"
},
"thresholds": [],
"pluginVersion": "9.2.1",
"links": [],
"legend": {
"alignAsTable": true,
"avg": false,
"current": true,
"max": true,
"min": true,
"rightSide": false,
"show": true,
"total": false,
"values": true
},
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fieldConfig": {
"defaults": {
"links": []
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"hiddenSeries": false,
"lines": true,
"linewidth": 2,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "000000009"
},
"expr": "delta(memcached_read_bytes_total{instance=\"$node\"}[1m])",
"format": "time_series",
"interval": "1m",
"intervalFactor": 1,
"legendFormat": "read",
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "000000009"
},
"expr": "delta(memcached_written_bytes_total{instance=\"$node\"}[1m])",
"format": "time_series",
"interval": "1m",
"intervalFactor": 1,
"legendFormat": "write",
"refId": "B"
}
],
"timeRegions": [],
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"xaxis": {
"mode": "time",
"show": true,
"values": [],
"name": null,
"buckets": null
},
"yaxes": [
{
"format": "bytes",
"logBase": 1,
"show": true
},
{
"format": "short",
"logBase": 1,
"show": true
}
],
"yaxis": {
"align": false
},
"timeFrom": null,
"timeShift": null
}
```

The data is served from Prometheus. In the documentation, […]

The second screenshot shows network traffic from one application node to Memcached (the yellow and red plots, eth1 interface). We have 20 nodes with similar traffic characteristics. What's confusing is that when the traffic decreases, the read/write graph drops as well.

I also noticed that our Memcached version is 1.4.15, so we are planning to update to a newer version (1.6.23-1).
---
I've got it, I figured it out. Prometheus scrapes the raw `bytes_read`/`bytes_written` statistics, but the graph plots

```
delta(memcached_read_bytes_total{instance="$node"}[1m])
```

so when the incoming transfer decreases, the delta decreases too, and hence the read/write graph goes down with it. It's that simple...
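Worth adding: `delta()` is really meant for gauges. For counters such as `memcached_read_bytes_total`, which drop back to 0 when the server restarts, `rate()` or `increase()` is usually the better choice, since both compensate for counter resets. A possible replacement for the panel queries (an untested sketch):

```
# bytes read from clients, per second, averaged over the last minute;
# rate() compensates for counter resets (e.g. after a memcached restart)
rate(memcached_read_bytes_total{instance="$node"}[1m])

# total bytes read over the last minute; also reset-aware
increase(memcached_read_bytes_total{instance="$node"}[1m])
```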
Yeah! I know that. We'll start regression tests on the newer version on Monday.
This looks like some inefficiency in our code. At some point, something ends in one of the php-cli forks. Maybe the parent process dies because it hits the memory limit; maybe a key fetched by the processes grows over time until it is unset (for example, because it expires); maybe some PHP array is being cleared because a memory limit is reached or its count exceeds some threshold; maybe the number of keys fetched by some process keeps increasing until those keys stop being selected for some reason; or maybe a cache fault is triggered and traffic then breaks through. Notice that the spontaneous clean-up occurs at a certain transfer level. It clearly looks like something is off in the traffic between PHP and Memcached. We'll check it on Monday. Thanks @dormando for your assistance.
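One concrete thing to check is whether the request-local memoization array can grow without bound inside a long-running php-cli worker until `memory_limit` kills the process. A minimal sketch of a size cap, assuming a wrapper class like the one above (the class name and limit are illustrative, not our actual code):

```php
<?php

class CacheWrapper
{
    // Illustrative limit; tune to the workload.
    private const MAX_MEMO_ENTRIES = 10000;

    /** @var array<string, mixed> Request-local memoization of Memcached lookups. */
    private static array $cached = [];

    public function __construct(private \Memcached $objMemcached)
    {
    }

    public function getFromCache(string $key): mixed
    {
        // Reset the memo table once it gets too large, so a long-running
        // php-cli worker cannot grow it until memory_limit is reached.
        if (\count(self::$cached) >= self::MAX_MEMO_ENTRIES) {
            self::$cached = [];
        }

        return self::$cached[$key] ??= $this->objMemcached->get($key);
    }
}
```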
---
Hello,
We've encountered some unusual behavior in our memcached instance.
After implementing a minor code adjustment to prevent repeated requests to memcached for the same key, we anticipated a decrease in the number of `get` commands, and the count did indeed decrease. Unexpectedly, however, we noticed a simultaneous rise in the read/write count, which we're unable to explain.
Furthermore, the read/write count periodically resets to 0.
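For reference, these counters can also be read directly from memcached via the PHP client to cross-check the Grafana panel; a minimal sketch (the server address is a placeholder, and `uptime` makes restarts visible):

```php
<?php

// Read memcached's raw counters directly to cross-check the Grafana panel.
// The server address is a placeholder.
$memcached = new \Memcached();
$memcached->addServer('127.0.0.1', 11211);

foreach ($memcached->getStats() as $server => $stats) {
    printf(
        "%s: cmd_get=%d bytes_read=%d bytes_written=%d uptime=%ds\n",
        $server,
        $stats['cmd_get'],
        $stats['bytes_read'],
        $stats['bytes_written'],
        $stats['uptime']
    );
}
```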
Could you provide more information on when the `bytes_read` and `bytes_written` counters are reset?
Thank you.