'compaction failed' - prometheus suddenly ate up entire disk #3487

Closed
gregorycerna opened this Issue Nov 17, 2017 · 25 comments

gregorycerna commented Nov 17, 2017

What did you do?

Four or five days ago, I upgraded to Prometheus v2, running in a 4-node Docker swarm.

What did you expect to see?

Prometheus metrics data to grow fairly slowly, at roughly the same rate as with v1.8 (~1 GB/month).

What did you see instead? Under which circumstances?

In the past 24 hours, the size of my Prometheus data suddenly and inexplicably increased more than 1500x, from ~500 MB to 771 GB, completely filling up my disk.

Environment

I'm not sure what caused this, as I haven't modified any of Prometheus's configs since I got v2 up and running smoothly. I'm running Prometheus in a Docker container in swarm mode, so my best guess is that something got corrupted when its container was killed and subsequently restarted on another host. Prometheus's data is stored on an NFS share available to all hosts, which is then mounted into the container. When checking the data folder, the vast majority of folders in it are <randomhash>.tmp folders; the only other entries besides the tmp folders are two block directories with hashes for names (but no .tmp), along with a wal folder and a lock file.
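
For reference, the runaway growth shows up quickly if you check the per-directory sizes. A rough sketch (the host-side path is the one from the compose file below; adjust for other setups):

# Size of each leftover .tmp block directory, largest last
du -sh /docker/prometheus/data/*.tmp | sort -h
# Total size of the data directory for comparison
du -sh /docker/prometheus/data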

  • System information:

    Linux 4.13.0-1-amd64 x86_64

  • Prometheus version:

/prometheus $ prometheus --version
prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
  build user:       root@615b82cb36b6
  build date:       20171108-07:11:59
  go version:       go1.9.2
  • Docker-compose configuration:
version: '3.3'
services:
  prom:
    image: prom/prometheus:v2.0.0
    volumes:
      - /docker/prometheus/config:/etc/prometheus
      - /docker/prometheus/data:/prometheus
    networks:
      - monitoring
    ports:
      - 9090:9090
  • Prometheus configuration file:
global:
  scrape_interval:     30s
  evaluation_interval: 30s

  external_labels:
    monitor: 'prometheus'

rule_files:
  - "alert.rules_nodes.yml"
  - "alert.rules_tasks.yml"
  - "alert.rules_service-groups.yml"

alerting:
  alertmanagers:
  - dns_sd_configs:
    - names:
      - 'alerts'
      type: 'A'
      port: 9093

scrape_configs:
- job_name: 'prometheus'
  dns_sd_configs:
  - names:
    - 'tasks.prom'
    type: 'A'
    port: 9090

- job_name: 'cadvisor'
  dns_sd_configs:
  - names:
    - 'tasks.cadvisor'
    type: 'A'
    port: 8080

- job_name: 'node-exporter'
  dns_sd_configs:
  - names:
    - 'tasks.node-exporter'
    type: 'A'
    port: 9100

- job_name: 'docker-exporter'
  static_configs:
  - targets:
    - 'node1:4999'
    - 'node2:4999'
    - 'node3:4999'
    - 'node4:4999'

- job_name: 'unifi-exporter'
  dns_sd_configs:
  - names:
    - 'tasks.unifi-exporter'
    type: 'A'
    port: 9130
  • Logs:
    (all logs retrieved using docker service logs monitor_prom)
    Logs on Prometheus startup:
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.384460037Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.384521845Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:11:59)"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.384544054Z caller=main.go:217 host_details="(Linux 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 3434f87590e0 (none))"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.389948893Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.390175381Z caller=main.go:314 msg="Starting TSDB"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:01:46.40905656Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
monitor_prom.1.3u83u9j3hljv@node3    | level=warn ts=2017-11-16T23:02:34.148096492Z caller=head.go:317 component=tsdb msg="unknown series references in WAL samples" count=21956
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:02:34.181636971Z caller=main.go:326 msg="TSDB started"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:02:34.181753604Z caller=main.go:394 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:02:34.260471148Z caller=main.go:371 msg="Server is ready to receive requests."

An example of the countless "compaction failed" errors from before the disk filled up:

monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:27:04.730665346Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:27:05.400205406Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{__name__=\\\"go_gc_duration_seconds\\\",instance=\\\"10.0.0.244:9090\\\",job=\\\"prometheus\\\",quantile=\\\"0\\\"}\""
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:28:05.436936297Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:28:06.103396123Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{__name__=\\\"go_gc_duration_seconds\\\",instance=\\\"10.0.0.244:9090\\\",job=\\\"prometheus\\\",quantile=\\\"0\\\"}\""
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:29:06.135866736Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:29:06.827149013Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set \"{__name__=\\\"go_gc_duration_seconds\\\",instance=\\\"10.0.0.244:9090\\\",job=\\\"prometheus\\\",quantile=\\\"0\\\"}\""

"compaction failed" errors from after the disk filled up:

monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:55:18.555189787Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:55:18.555336942Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: mkdir /prometheus/01BZ3M464VD8YNKY27ZX8HKX2V.tmp: no space left on device"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:56:18.567555044Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:56:18.567783423Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: mkdir /prometheus/01BZ3M60R7BGV0TDYT9G4A3TRK.tmp: no space left on device"
monitor_prom.1.3u83u9j3hljv@node3    | level=info ts=2017-11-16T23:57:18.580361477Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1510704000000 maxt=1510711200000
monitor_prom.1.3u83u9j3hljv@node3    | level=error ts=2017-11-16T23:57:18.580538384Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: mkdir /prometheus/01BZ3M7VBM62PP3B0QAEBADCHK.tmp: no space left on device"
gouthamve commented Nov 17, 2017

Thanks for the report, looking into it.

@fabxc My best guess here is that we are trying to compact, but because of the out-of-order append we abandon the compaction mid-way without cleaning up, filling the disk. A stopgap would be to clean up the compaction directory if compacting fails. I will look into it now.

Why there is an out-of-order append in the compaction path in the first place is another question altogether.

TimSimmons commented Nov 20, 2017

This happened to me today; the data directory went from ~3 GB to ~300 GB.

These are the first log lines related to compaction:

level=error ts=2017-11-20T06:28:05.435890644Z caller=db.go:260 component=tsdb msg="compaction failed" err="persist head block: write compaction: add series: symbol entry for \"9109\" does not exist"

I see that log line 586 times, and there were 588 leftover .tmp block directories.
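
(A rough way to get those counts; the docker service name and data path below are the ones from the original report and will differ per setup:)

# Count the failed-compaction log lines
docker service logs monitor_prom 2>&1 | grep -c 'msg="compaction failed"'
# Count the leftover .tmp block directories
ls -d /prometheus/*.tmp | wc -l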

vsakhart commented Nov 21, 2017

Just wanted to comment that I'm also facing this issue.

gouthamve added a commit to gouthamve/tsdb that referenced this issue Nov 21, 2017

Don't retry failed compactions.
Fixes prometheus/prometheus#3487

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>

hectorag commented Nov 22, 2017

Hi, I'm also having this issue.

Here is the error log:

msg="compaction failed" err="persist head block: write compaction: add series: out-of-order series added with label set "{__name__=\"container_fs_inodes_free

yinchuan commented Nov 23, 2017

Thanks for reporting.
I plan to upgrade Prometheus from 1.8.2 to 2.0, but this issue is critical enough that I have to put the upgrade on hold.
Waiting for it to be solved.

gouthamve commented Nov 23, 2017

Hi, thanks. I have been looking into this, but I am not able to reproduce it, which is making things hard.

Would any of you be willing to ship your WAL directory to us? That would help reproduce this and would make things much easier.

Further, is this happening after a restart or crash of Prometheus? @hectorag @vsakhart @TimSimmons
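
(For anyone willing to share a WAL, something like the following captures just that directory; the path is the host-side one from the compose file at the top of this issue, so adjust it to your setup, and keep in mind the WAL contains your label names and values:)

# Archive only the wal directory from the Prometheus data dir
tar czf prometheus-wal.tar.gz -C /docker/prometheus/data wal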

gouthamve added a commit to gouthamve/tsdb that referenced this issue Nov 23, 2017

Don't retry failed compactions.
Fixes prometheus/prometheus#3487

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
gouthamve commented Nov 23, 2017

@hectorag Thanks! Downloaded, will let you know how this goes.

Also, feel free to hop onto the Prometheus IRC channel next time you want to send an ephemeral message :)

FUSAKLA commented Nov 28, 2017

Is there any progress on this issue?
I'd like to upgrade to 2.0, but this issue sounds like a blocker for production use.

homelessnessbo commented Nov 28, 2017

Same here. I was about to upgrade to 2.0 when I saw this. Any news?

frafranck commented Nov 28, 2017

I got the same problem; I hope it can be resolved soon. The problem is not just disk space: it also increases the head chunk count, the block duration, and memory usage. I was at around 3 GB of RAM used and Prometheus increased linearly to 7 GB (in a few hours) and kept going...

For information, I did the following to get Prometheus running again without errors:

  • I deleted all series which generated this error; in my case I had to delete old series from old cadvisor jobs.
    I did a POST to http://prometheus/api/v2/admin/tsdb/delete_series
    with a body like this (see the curl sketch after this list):
    { "matchers": [{ "name": "job", "value": "cadvisor" }] }
    Result: I removed all data about those jobs, and the other scraped data is OK.

  • Last thing: in my Prometheus data directory there were many .tmp directories; I removed all of them.

After that, Prometheus got better, with no more error logs.
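
A sketch of that delete call as a curl command (it assumes the admin API is enabled with --web.enable-admin-api, that Prometheus listens on the default port 9090, and uses the host name and job value from the comment above):

# Delete every series of the old cadvisor jobs via the v2 admin API
curl -X POST http://prometheus:9090/api/v2/admin/tsdb/delete_series \
  -H 'Content-Type: application/json' \
  -d '{ "matchers": [{ "name": "job", "value": "cadvisor" }] }'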

hectorag commented Nov 28, 2017

I also have plenty of these .tmp directories. Is there any reason why Prometheus is generating all of them? Is it normal? Is it safe to remove them as @frafranck proposed?

TimSimmons commented Nov 28, 2017

I removed all the *.tmp directories with no adverse effects; it should be safe. They seem to be leftovers of failed compactions.

gouthamve commented Nov 28, 2017

Sorry for taking too long on this. While I could see what was going wrong, I was not able to figure out why, maybe because it was already fixed upstream. Full description below:

So here's what was happening: two series were ending up with the same seriesID in the WAL, which was causing issues. Normally two series can never have the same seriesID because we increment the seriesID atomically. When reading the WAL, we store the highest seriesID we saw in the WAL and then increment it for any new series we see. This ensures the seriesID can never be the same for two different series.

Now, when I was looking at the data (thanks @hectorag), I could see that some seriesIDs which were at the end of a segment near a restart were also occurring at the beginning of the next segment. Essentially, the Prometheus server somehow never saw the series at the end of the WAL, else the next seriesIDs would have been higher. But on a later restart it did see them, causing two series to have the same ID.

This was the culprit (rather, the fix): prometheus/tsdb#204. In 2.0 this change was not included, which meant that sometimes when we restart there is still some data in the Linux page cache which was not yet flushed to disk but was flushed later. This meant that on the immediate restart we never read that data!

I have a test that reproduces this behaviour if that change is reverted. While this fixes the case of a clean shutdown, it might still be an issue during crashes. Will have a PR with the fix out early tomorrow. Thanks for your patience!

Also, yes, you can happily delete the .tmp folders.
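
For anyone cleaning up by hand, a minimal sketch (it assumes the data directory is mounted at /prometheus as in the compose file above; real block directories, the wal directory and the lock file are left untouched):

# Remove only the leftover .tmp block directories
find /prometheus -maxdepth 1 -type d -name '*.tmp' -exec rm -rf {} +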

gouthamve added a commit to gouthamve/tsdb that referenced this issue Nov 30, 2017

Fdatasync on read to flush any unflushed data.
This is to handle partial writes from a previous crash.

Fixes prometheus/prometheus#3487

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>

zegl commented Jan 12, 2018

I just got to experience this issue. Are there any plans for a bugfix release for 2.0 any time soon?

krasi-georgiev commented Jan 12, 2018

@zegl I think 2.1 is coming in the next 1-2 weeks.

zegl commented Jan 12, 2018

@krasi-georgiev Thanks for the reply.


Deleting all *.tmp directories doesn't seem to permanently solve the issue in our case.

Soon after Prometheus has started up again, new .tmp folders are created.

This is the full log from Prometheus since the cleanup of all .tmp folders:

-- Logs begin at Thu 2018-01-11 06:15:57 CET. --
Jan 12 15:01:51 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:01:51.443055089Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
Jan 12 15:01:51 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:01:51.443072059Z caller=main.go:314 msg="Starting TSDB"
Jan 12 15:01:51 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:01:51.444155629Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
Jan 12 15:03:31 prometheusfederate-0 prometheus[7101]: level=error ts=2018-01-12T14:03:31.871733829Z caller=wal.go:275 component=tsdb msg="WAL corruption detected; truncating" err="unexpected CRC32 checksum b3c16ec5, want 0" file=/prometheus-data/data/wal/013326 pos=183825984
Jan 12 15:03:32 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:03:32.642561078Z caller=main.go:326 msg="TSDB started"
Jan 12 15:03:32 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:03:32.64262694Z caller=main.go:394 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 12 15:03:32 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:03:32.655687505Z caller=main.go:371 msg="Server is ready to receive requests."
Jan 12 15:04:32 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:04:32.689718825Z caller=compact.go:361 component=tsdb msg="compact blocks" count=3 mint=1515153600000 maxt=1515736800000
Jan 12 15:16:01 prometheusfederate-0 prometheus[7101]: level=error ts=2018-01-12T14:16:01.260962884Z caller=db.go:260 component=tsdb msg="compaction failed" err="compact [/prometheus-data/data/01C3902Q39E6RMFMXDH539JZXR /prometheus-data/data/01C3ESGSZ5W120PDTN16QX2RZT /prometheus-data/data/01C3MJX84ZVN77NGS16S1Q8F90]: write compaction: write postings: write postings: exceeding max size of 4GiB"
Jan 12 15:17:03 prometheusfederate-0 prometheus[7101]: level=info ts=2018-01-12T14:17:03.391396245Z caller=compact.go:361 component=tsdb msg="compact blocks" count=3 mint=1515153600000 maxt=1515736800000

One line from the boot procedure is especially concerning:

Jan 12 15:03:31 prometheusfederate-0 prometheus[7101]: level=error ts=2018-01-12T14:03:31.871733829Z caller=wal.go:275 component=tsdb msg="WAL corruption detected; truncating" err="unexpected CRC32 checksum b3c16ec5, want 0" file=/prometheus-data/data/wal/013326 pos=183825984

Is this error the root cause? Is it safe to delete the wal folder?


This is what the fs looked like before the start of Prometheus:

total 52K
drwxr-x--- 12 prometheus prometheus 4.0K Jan 12 15:01 .
drwxr-xr-x  4 root       root       4.0K Dec  4 16:06 ..
drwxr-xr-x  3 prometheus prometheus 4.0K Dec  9 16:09 01C0XWPJSXAR6BGVBV0JT36KPW
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 16 08:16 01C1F29PSEQWTAK21A9WXKQAM7
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 23 02:17 01C20EG93WYAYEF55PPF6VAA58
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 29 20:17 01C2HTNT2KB8BS389HSVYTG86C
drwxr-xr-x  3 prometheus prometheus 4.0K Jan  5 14:18 01C336WEE5ZNZMXAQ2PZ2YGNZT
drwxr-xr-x  3 prometheus prometheus 4.0K Jan  7 20:06 01C3902Q39E6RMFMXDH539JZXR
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 10 02:08 01C3ESGSZ5W120PDTN16QX2RZT
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 08:07 01C3MJX84ZVN77NGS16S1Q8F90
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 14:02 01C3N7BEQYD3YNEGWMDKYTHR3Z
-rw-------  1 prometheus prometheus    5 Jan 12 13:57 lock
drwxr-xr-x  2 prometheus prometheus 4.0K Jan 12 14:57 wal

And this is what it looks like half an hour later:

total 64K
drwxr-x--- 15 prometheus prometheus 4.0K Jan 12 15:29 .
drwxr-xr-x  4 root       root       4.0K Dec  4 16:06 ..
drwxr-xr-x  3 prometheus prometheus 4.0K Dec  9 16:09 01C0XWPJSXAR6BGVBV0JT36KPW
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 16 08:16 01C1F29PSEQWTAK21A9WXKQAM7
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 23 02:17 01C20EG93WYAYEF55PPF6VAA58
drwxr-xr-x  3 prometheus prometheus 4.0K Dec 29 20:17 01C2HTNT2KB8BS389HSVYTG86C
drwxr-xr-x  3 prometheus prometheus 4.0K Jan  5 14:18 01C336WEE5ZNZMXAQ2PZ2YGNZT
drwxr-xr-x  3 prometheus prometheus 4.0K Jan  7 20:06 01C3902Q39E6RMFMXDH539JZXR
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 10 02:08 01C3ESGSZ5W120PDTN16QX2RZT
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 08:07 01C3MJX84ZVN77NGS16S1Q8F90
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 14:02 01C3N7BEQYD3YNEGWMDKYTHR3Z
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 15:04 01C3NAZE1HPDZD762AQJSHKFMT.tmp
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 15:17 01C3NBPB4YPDGZ1GTT77DW3X2M.tmp
drwxr-xr-x  3 prometheus prometheus 4.0K Jan 12 15:29 01C3NCCXCJXCCCQ93TFMBZKCB9.tmp
-rw-------  1 prometheus prometheus    5 Jan 12 15:01 lock
drwxr-xr-x  2 prometheus prometheus 4.0K Jan 12 15:32 wal

Currently the .tmp directories are filling up at a rate of ~300 GB/hour.


Is there anything we can do to more permanently resolve this issue?


Update: setting --storage.tsdb.min-block-duration=30m and --storage.tsdb.max-block-duration=1d seems to have worked as a temporary workaround. No leftover .tmp dirs have been created after 2 hours of uptime.
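
For reference, that workaround amounts to starting Prometheus with something like the following (a sketch; the config and data paths are the ones visible in the logs above, and the rest of the command line stays whatever you already use):

prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/prometheus-data/data \
  --storage.tsdb.min-block-duration=30m \
  --storage.tsdb.max-block-duration=1d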

simonpasquier commented Jan 12, 2018

Looks like you're hitting #3190 which depends on prometheus/tsdb#238 and isn't solved yet.

write compaction: write postings: write postings: exceeding max size of 4GiB"

var23rav commented Apr 19, 2018

I am facing the same issue while reloading Prometheus (forced close and restart from the command prompt on Windows, or using an HTTP POST request to http://localhost:9090/-/reload).

Note: during the Prometheus restart, meta.json is getting deleted automatically from the ./data/*/ folders.

Any fix for this?

krasi-georgiev commented Apr 19, 2018

krasi-georgiev commented Apr 22, 2018

Is it writing to local disk or NFS?

var23rav commented Apr 27, 2018

@krasi-georgiev Sorry for the late reply. I was using the old version (2.0.0).
I moved to Prometheus 2.2.1 and it's fine (no issues so far).

strowi commented May 4, 2018

Hi,

it seems I am experiencing the same issue with 2.2.1 writing to NFS.

krasi-georgiev commented May 5, 2018

I can't remember what the issue was, but I remember NFS misbehaving on all versions.

strowi commented May 8, 2018

Yes, here too. I moved to 'emptyDir' and everything seems to be working fine for now. If needed I can provide more info.

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
