
podman compose is not working on some projects on macOS #19852

Closed
benoitf opened this issue Sep 4, 2023 · 34 comments · Fixed by #22290
Labels
kind/bug Categorizes issue or PR as related to a bug. podman-desktop stale-issue

Comments

benoitf (Contributor) commented Sep 4, 2023

Issue Description

I tried to run https://github.com/change-metrics/monocle#installation using podman compose, but I'm not able to make it work.

The issue is around the volumes: a :z flag prevents the startup, but even after removing the :z flag we then get this error:

monocle-elastic-1  | ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
monocle-elastic-1  | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:397)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createDirectory(Files.java:700)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:807)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createDirectories(Files.java:793)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:300)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:224)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:298)
monocle-elastic-1  | 	at org.elasticsearch.node.Node.<init>(Node.java:429)
monocle-elastic-1  | 	at org.elasticsearch.node.Node.<init>(Node.java:309)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
monocle-elastic-1  | 	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
monocle-elastic-1  | 	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
monocle-elastic-1  | 	at org.elasticsearch.cli.Command.main(Command.java:77)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
monocle-elastic-1  | For complete error details, refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log

in the Elasticsearch container.

Steps to reproduce the issue

  1. git clone https://github.com/change-metrics/monocle
  2. cd monocle
  3. echo CRAWLERS_API_KEY=$(uuidgen) > .secrets
  4. podman compose up

Describe the results you received

Failure:

~/git/containers/podman/bin/darwin/podman  compose up
>>>> Executing external compose provider "/opt/homebrew/bin/docker-compose". Please refer to the documentation for details. <<<<

[+] Running 3/0
 ✔ Container monocle-elastic-1  Recreated                                                                                                                                               0.0s
 ✔ Container monocle-api-1      Recreated                                                                                                                                               0.0s
 ✔ Container monocle-crawler-1  Recreated                                                                                                                                               0.0s
Attaching to monocle-api-1, monocle-crawler-1, monocle-elastic-1
Error response from daemon: lsetxattr /Users/benoitf/git/change-metrics/monocle/data: operation not supported
Error: executing /opt/homebrew/bin/docker-compose up: exit status 1

The volumes are using the :z suffix, and it doesn't work:

     volumes:
      - "./etc:/etc/monocle:z"

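Until relabeling is supported by podman-machine, one workaround is to strip the label suffixes before starting. This is only a sketch of my own (not a fix from this thread), assuming the flags sit at the end of quoted bind-mount entries as in Monocle's docker-compose.yml:

```shell
#!/bin/sh
# Strip SELinux relabel suffixes (:z / :Z) from quoted bind-mount entries,
# e.g.  - "./etc:/etc/monocle:z"  ->  - "./etc:/etc/monocle"
set -eu

tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
    volumes:
      - "./etc:/etc/monocle:z"
      - "./data:/usr/share/elasticsearch/data:Z"
EOF

# -i.bak keeps a backup and works with both BSD (macOS) and GNU sed.
sed -i.bak -E 's/:(z|Z)"$/"/' "$tmp"
cat "$tmp"
rm -f "$tmp" "$tmp.bak"
```

Running the same sed against the real docker-compose.yml leaves a .bak copy so the flags can be restored later.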
Describe the results you expected

The compose project should start successfully.

podman info output

If you are unable to run podman info for any reason, please provide the podman version, operating system and its version and the architecture you are running.

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

Additional information

@benoitf benoitf added kind/bug Categorizes issue or PR as related to a bug. podman-desktop labels Sep 4, 2023
vrothberg (Member) commented:

Also needs an echo CRAWLERS_API_KEY=$(uuidgen) > .secrets for the reproducer.

benoitf (Contributor, author) commented Sep 4, 2023

Ah yes, I forgot to report this step 👍 thanks @vrothberg.

vrothberg (Member) commented:

     volumes:
      - "./etc:/etc/monocle:z"

The :z is the culprit. SELinux relabeling does not work with podman-machine at present (see #13631).

At the moment, the user experience isn't where I'd like it to be: the error is hard to read and the issue isn't documented. Since we know that SELinux relabeling doesn't work with podman-machine, there are a number of options:

  • Return a helpful error when attempting to relabel
  • Ignore the relabel request when running on Plan 9 (9p) mounts

One way or another, it should be documented.
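Until either option lands, users can at least spot the problem up front. A minimal pre-flight sketch of my own (the check_relabel_flags helper is hypothetical, not part of podman) that lists mounts requesting relabeling:

```shell
#!/bin/sh
# Pre-flight check: list volume entries that request SELinux relabeling
# (:z or :Z), which podman-machine cannot honor on macOS.
set -eu

check_relabel_flags() {
    # Print matching lines with line numbers; exit 0 if any are found.
    grep -nE ':[zZ]"?[[:space:]]*$' "$1"
}

# Demo on a fragment modeled after Monocle's docker-compose.yml:
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
    volumes:
      - "./etc:/etc/monocle:z"
      - "./data:/usr/share/elasticsearch/data"
EOF
check_relabel_flags "$tmp"
rm -f "$tmp"
```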

vrothberg (Member) commented:

@rhatdan @ashley-cui @giuseppe WDYT?

benoitf (Contributor, author) commented Sep 4, 2023

@vrothberg but if we drop the :z then it still does not work, so 🤷

vrothberg (Member) commented:

@vrothberg but if we drop the :z then it still does not work, so 🤷

Did you remove all :z in the compose YAML? Doing that does the trick on my machine.

benoitf (Contributor, author) commented Sep 4, 2023

@vrothberg yes, I removed the two :z and the one :Z.

Then it starts, but I get:

monocle-elastic-1  | ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
monocle-elastic-1  | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes

So it's still not working.

vrothberg (Member) commented:

@benoitf can you share the full output? Since it works on my machine, I am a bit puzzled. Does restarting the machine help?

benoitf (Contributor, author) commented Sep 5, 2023

@vrothberg are you testing on macOS?

Yes, and I tried deleting/recreating a machine.

The diff on the docker-compose.yml file:

diff --git a/docker-compose.yml b/docker-compose.yml
index b505ac82..17df210f 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -20,7 +20,7 @@ services:
       - "${COMPOSE_MONOCLE_API_ADDR:-0.0.0.0}:${COMPOSE_MONOCLE_API_PORT:-8080}:8080"
     restart: unless-stopped
     volumes:
-      - "./etc:/etc/monocle:z"
+      - "./etc:/etc/monocle"
   crawler:
     command: monocle crawler
     depends_on:
@@ -36,7 +36,7 @@ services:
     image: "quay.io/change-metrics/monocle:${COMPOSE_MONOCLE_VERSION:-1.8.0}"
     restart: unless-stopped
     volumes:
-      - "./etc:/etc/monocle:z"
+      - "./etc:/etc/monocle"
   elastic:
     environment:
       ES_JAVA_OPTS: "-Xms${COMPOSE_ES_XMS:-512m} -Xmx${COMPOSE_ES_XMX:-512m}"
@@ -54,5 +54,5 @@ services:
         hard: 65535
         soft: 65535
     volumes:
-      - "./data:/usr/share/elasticsearch/data:Z"
+      - "./data:/usr/share/elasticsearch/data"
 version: '3'

I'm using Docker Compose version 2.21.0

~/git/containers/podman/bin/darwin/podman compose version
>>>> Executing external compose provider "/opt/homebrew/bin/docker-compose". Please refer to the documentation for details. <<<<
Docker Compose version 2.21.0

full log:

~/git/containers/podman/bin/darwin/podman  compose up
>>>> Executing external compose provider "/opt/homebrew/bin/docker-compose". Please refer to the documentation for details. <<<<

[+] Running 3/0
 ✔ Container monocle-elastic-1  Created                                                                                                                                                                            0.0s
 ✔ Container monocle-api-1      Recreated                                                                                                                                                                          0.0s
 ✔ Container monocle-crawler-1  Recreated                                                                                                                                                                          0.0s
Attaching to monocle-api-1, monocle-crawler-1, monocle-elastic-1
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 137
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:44:59,625Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "version[7.17.5], pid[2], build[default/docker/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/6.4.11-200.fc38.aarch64/aarch64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/18.0.1.1/18.0.1.1+2-6]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:44:59,643Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:44:59,645Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-5364747705076657440, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -XX:MaxDirectMemorySize=268435456, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,299Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [aggs-matrix-stats]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,301Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [analysis-common]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [constant-keyword]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,303Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [frozen-indices]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,304Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [ingest-common]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,311Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [ingest-geoip]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,312Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [ingest-user-agent]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,313Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [kibana]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,313Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [lang-expression]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,313Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [lang-mustache]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,324Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [lang-painless]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,324Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [legacy-geo]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,324Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [mapper-extras]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,325Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [mapper-version]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,325Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [parent-join]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,325Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [percolator]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,326Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [rank-eval]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,326Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [reindex]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,326Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [repositories-metering-api]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,328Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [repository-encrypted]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,328Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [repository-url]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,328Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [runtime-fields-common]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,329Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [search-business-rules]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,329Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [searchable-snapshots]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,330Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [snapshot-repo-test-kit]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,330Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [spatial]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,331Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [transform]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,333Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [transport-netty4]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,334Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [unsigned-long]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,337Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [vector-tile]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,337Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [vectors]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,337Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [wildcard]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,337Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-aggregate-metric]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,337Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-analytics]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,338Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-async]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,343Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-async-search]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,343Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-autoscaling]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,344Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-ccr]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,344Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-core]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,344Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-data-streams]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,346Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-deprecation]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,346Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-enrich]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,348Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-eql]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,348Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-fleet]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,348Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-graph]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,348Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-identity-provider]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,349Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-ilm]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,354Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-logstash]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,354Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-ml]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,354Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-monitoring]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,358Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-ql]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,358Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-rollup]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,360Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-security]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,360Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-shutdown]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,360Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-sql]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,360Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-stack]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,362Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-text-structure]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,362Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-voting-only-node]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,364Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "loaded module [x-pack-watcher]" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,365Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "no plugins loaded" }
monocle-elastic-1  | {"type": "server", "timestamp": "2023-09-05T07:45:11,586Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "c682b8c3b2c6", "message": "uncaught exception in thread [main]",
monocle-elastic-1  | "stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:173) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
monocle-elastic-1  | "at org.elasticsearch.node.Node.<init>(Node.java:1088) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.node.Node.<init>(Node.java:309) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "... 6 more",
monocle-elastic-1  | "Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes",
monocle-elastic-1  | "at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
monocle-elastic-1  | "at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) ~[?:?]",
monocle-elastic-1  | "at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
monocle-elastic-1  | "at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:397) ~[?:?]",
monocle-elastic-1  | "at java.nio.file.Files.createDirectory(Files.java:700) ~[?:?]",
monocle-elastic-1  | "at java.nio.file.Files.createAndCheckIsDirectory(Files.java:807) ~[?:?]",
monocle-elastic-1  | "at java.nio.file.Files.createDirectories(Files.java:793) ~[?:?]",
monocle-elastic-1  | "at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:300) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:224) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:298) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.node.Node.<init>(Node.java:429) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.node.Node.<init>(Node.java:309) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.5.jar:7.17.5]",
monocle-elastic-1  | "... 6 more"] }
monocle-elastic-1  | uncaught exception in thread [main]
monocle-elastic-1  | ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
monocle-elastic-1  | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:397)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createDirectory(Files.java:700)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:807)
monocle-elastic-1  | 	at java.base/java.nio.file.Files.createDirectories(Files.java:793)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:300)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:224)
monocle-elastic-1  | 	at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:298)
monocle-elastic-1  | 	at org.elasticsearch.node.Node.<init>(Node.java:429)
monocle-elastic-1  | 	at org.elasticsearch.node.Node.<init>(Node.java:309)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
monocle-elastic-1  | 	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
monocle-elastic-1  | 	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
monocle-elastic-1  | 	at org.elasticsearch.cli.Command.main(Command.java:77)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
monocle-elastic-1  | 	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
monocle-elastic-1  | For complete error details, refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log
monocle-api-1 exited with code 0
monocle-elastic-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0
monocle-api-1 exited with code 0
monocle-crawler-1 exited with code 0

@vrothberg
Member

@vrothberg are you testing on macOS ?

Yes.

@vrothberg
Member

I can reproduce as well. Needs to run for a while. It works with Docker.

But I honestly have no idea why. The error message leaves some room for interpretation. @ashley-cui @rhatdan are there further limitations of the plan9 mounts?

@benoitf
Contributor Author

benoitf commented Sep 5, 2023

ah yes @vrothberg, it requires some time to let the application start

@benoitf
Contributor Author

benoitf commented Sep 5, 2023

@vrothberg also on your previous question, I think it would make sense to ignore the :z suffix and report a warning or a trace, but still continue.

if virtiofs + the Apple hypervisor solve the issue, it would be great to switch to those as well

@vrothberg
Member

vrothberg commented Sep 5, 2023

@vrothberg also on your previous question, I think it would make sense to ignore the :z suffix and report a warning or a trace, but still continue.

@rhatdan @ashley-cui @n1hility WDYT? I think this may help resolve some issues wrt. volume mounts on qemu+plan9

@baude
Member

baude commented Sep 5, 2023

i agree ^^

@n1hility
Member

n1hility commented Sep 5, 2023

same opinion: this is a common problem we have seen so dropping it makes a ton of sense

@vrothberg
Member

Thanks! @n1hility, does labeling work on Windows with WSL?

@rhatdan
Member

rhatdan commented Sep 5, 2023

I am fine with changing :Z, :z relabels to warn instead of error when the error is ENOTSUP

@rhatdan
Member

rhatdan commented Sep 5, 2023

@vrothberg and I discussed issues around chown, which I believe are fixed by moving to the Apple hypervisor.

Bottom line: if a container or VM does a chown on a file to a UID different than the Mac user, the Mac file system is not going to allow this. If the remote file system supports the chown via xattr support then it could work. Basically virtiofs would set an xattr on the Mac that tells virtiofsd inside of the VM to show the file ownership based on the xattr rather than the ownership of the file on the Mac.

@n1hility
Member

n1hility commented Sep 5, 2023

@vrothberg right, it’s the same issue as on Mac: 9p doesn’t support it, so it will also fail if you mount something on Windows (most cases). Although it will work if you mount something local to the WSL instance on its ext4 volume.

@n1hility
Member

n1hility commented Sep 5, 2023

@vrothberg and I discussed issues around chown, which I believe are fixed by moving to the Apple hypervisor.

Bottom line: if a container or VM does a chown on a file to a UID different than the Mac user, the Mac file system is not going to allow this. If the remote file system supports the chown via xattr support then it could work. Basically virtiofs would set an xattr on the Mac that tells virtiofsd inside of the VM to show the file ownership based on the xattr rather than the ownership of the file on the Mac.

@rhatdan IIUC that might be problematic. The issue I am thinking of is that any files not created by the VM (for example a directory inside someone's Mac home dir) will not have the xattr, so will probably get permission issues if the uid doesn't match.

@n1hility
Member

n1hility commented Sep 5, 2023

@vrothberg and I discussed issues around chown, which I believe are fixed by moving to the Apple hypervisor.
Bottom line: if a container or VM does a chown on a file to a UID different than the Mac user, the Mac file system is not going to allow this. If the remote file system supports the chown via xattr support then it could work. Basically virtiofs would set an xattr on the Mac that tells virtiofsd inside of the VM to show the file ownership based on the xattr rather than the ownership of the file on the Mac.

@rhatdan IIUC that might be problematic. The issue I am thinking of is that any files not created by the VM (for example a directory inside someone's Mac home dir) will not have the xattr, so will probably get permission issues if the uid doesn't match.

Related: we might be limited in what we can do on the applehv side. Their docs make it look like they only support virtiofs with a passthrough configuration:

https://developer.apple.com/documentation/virtualization/vzvirtiofilesystemdevice

- The framework reads and writes files using the user ID (UID) of the effective user, which is the UID of the current user, rather than the UID of the system process.

- The framework doesn’t allow reading or overwriting of files with permissions where the file is inaccessible to the current user.

- The framework ignores requests from guest operating systems to change the UID or group ID (GID) of files on the host.

@rhatdan
Member

rhatdan commented Sep 14, 2023

If you are getting permission denied and have a reproducer, could you paste the AVCs?

podman machine ssh ausearch -m avc

@benoitf
Contributor Author

benoitf commented Sep 14, 2023

@rhatdan

so there is no ausearch command in the podman machine by default

 podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Fedora CoreOS 38.20230902.2.1
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

[core@localhost ~]$ ausearch -m avc
-bash: ausearch: command not found

I did install it with

rpm-ostree install audit

then restarted the machine

I switched from rootless to rootful as well

but still got

monocle-elastic-1  | uncaught exception in thread [main]
monocle-elastic-1  | ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
monocle-elastic-1  | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
monocle-elastic-1  | 	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)

and the ausearch command is reporting

[root@localhost ~]#  ausearch -m avc
<no matches>

if I connect to the container being launched

podman exec  -it f5b907119ab8 ls -la  /usr/share/elasticsearch                                                                                       INT ✘
total 660
drwxrwxr-x.  1 root          root        46 Sep 14 14:54 .
drwxr-xr-x.  1 root          root        27 Jun 24  2022 ..
-rw-r--r--.  1 root          root       220 Jun 24  2022 .bash_logout
-rw-r--r--.  1 root          root      3771 Jun 24  2022 .bashrc
drwxrwxr-x.  3 elasticsearch root        17 Sep 14 14:54 .cache
-rw-r--r--.  1 root          root       807 Jun 24  2022 .profile
-r--r--r--.  1 root          root      3860 Jun 23  2022 LICENSE.txt
-r--r--r--.  1 root          root    640930 Jun 23  2022 NOTICE.txt
-r--r--r--.  1 root          root      2710 Jun 23  2022 README.asciidoc
drwxrwxr-x.  1 elasticsearch root         6 Jun 24  2022 bin
drwxrwxr-x.  1 elasticsearch root        36 Sep 14 14:54 config
drwxr-xr-x.  3           501 dialout     96 Sep  5 07:51 data
dr-xr-xr-x.  1 root          root        17 Jun 23  2022 jdk
dr-xr-xr-x.  3 root          root      4096 Jun 23  2022 lib
drwxrwxr-x.  1 elasticsearch root      4096 Sep 14 14:58 logs
dr-xr-xr-x. 61 root          root      4096 Jun 23  2022 modules
drwxrwxr-x.  1 elasticsearch root         6 Jun 23  2022 plugins

we see that the mounted data folder has the uid/gid

drwxr-xr-x.  3           501 dialout     96 Sep  5 07:51 data

501/dialout, which maps to my host uid/gid on macOS: uid=501(benoitf) gid=20(staff)

so it looks like a uid/gid issue on volumes

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@maxandersen

related: #19852

@maxandersen

it would be really nice if we could find a way to make the :Z issue work portably.

on Linux we must use :Z, as SELinux otherwise prevents mounting the volume, but on Mac (and also Windows?) it will fail.

Meaning when running e.g. Jekyll serving with podman you'll need to handle this differently per OS, but on Docker it "just works".

This is also why in Quarkus Dev Services we have code to handle :Z/:z, as otherwise launching containers with volumes will fail for the same reasons.

What I'm saying is that anyone wanting to mount data from home needs conditional logic to work with podman, where elsewhere it "just works".

could we not somehow make :Z a no-op on OSes that do not support it / are not affected by it?

@n1hility
Member

n1hility commented Apr 5, 2024

@maxandersen right, that's what this issue is about

@benoitf
Contributor Author

benoitf commented Apr 5, 2024

AFAIK on podman 5 with macOS/applehv I think you can now use z/Z

@maxandersen

@benoitf 5.0.1 seems to have the same issue? Is there something one needs to enable to have it work?

@benoitf
Contributor Author

benoitf commented Apr 5, 2024

is it not working as expected with 5.0.1 (e.g. the permissions are not the expected ones), or do you get a failure when starting it?

for example with v4 you had to remove the flag, as we were always getting "operation not supported"

@maxandersen

@benoitf @n1hility replicated it and was the one suggesting the idea of dropping z/Z. Two others had the issue.
I haven't yet updated to 5.0.1 as I'm trying to test the current default Podman Desktop.

I just added the comment above as that didn't seem to be mentioned on the issue.

@n1hility
Member

n1hility commented Apr 5, 2024

right yeah, looking into where it's coming from

@n1hility
Member

n1hility commented Apr 5, 2024

Looks like the reason is that 5 (virtiofs on applehv) is returning EACCES, not ENOTSUP. Will need to think about this one, as EACCES could be a legit failure, whereas ENOTSUP is black and white.

[pid 2028] <... lsetxattr resumed>) = -1 EACCES (Permission denied)
