Hi everyone, I'm facing an issue with the MinIO/S3 integration or configuration when running a pipeline remotely on the Hop Server (Docker container). The same pipeline works perfectly when executed with the local pipeline engine (via Hop GUI or the Hop Web Docker container), but it fails to read files or their content from the bucket when executed via the remote pipeline engine on the server (submitted from Hop GUI/Web).
Details
MinIO Log - local pipeline engine (shortened)
MinIO Log - remote pipeline engine (shortened)
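For anyone reproducing this, a quick way to separate a Hop metadata problem from a plain network/credential problem is to try the bucket access from inside the Hop Server container directly. This is a sketch with hypothetical values: the endpoint `http://minio:9000`, the bucket `my-bucket`, and the `minioadmin` credentials are placeholders, and it assumes the AWS CLI is available in (or can be run alongside) the container.

```shell
# Placeholder credentials for a default local MinIO instance.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# --endpoint-url points the AWS CLI at MinIO instead of AWS.
# If this listing works from inside the server container but the
# pipeline still fails, the problem is on the Hop metadata side,
# not connectivity or credentials.
aws --endpoint-url http://minio:9000 s3 ls s3://my-bucket/
```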
Alternative attempt with S3 configuration via environment variables
(Made this collapsible, because it's more of a follow-up problem.)
As I understand it, it should also be possible to use Hop's S3 implementation to access MinIO (please correct me if I'm wrong), or at least another S3 object store. So I tried configuring the connection for that.
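For reference, the standard credential variables read by the AWS SDK (which backs S3-style VFS access) look like the fragment below. The values are placeholders for a local MinIO instance; note that for the remote engine these would have to be set in the environment of the Hop Server container (e.g. `docker run -e ...` or a compose `environment:` block), not only in the environment of the GUI. How the MinIO-specific endpoint override is supplied depends on the Hop version, so that part is left out here.

```shell
# Placeholder values for a default local MinIO setup.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export AWS_REGION=us-east-1
```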
Additional attempts
Conclusion
I've tried all the options I could find in the documentation/code and online, or that were suggested by AI tools, but haven't managed to resolve the issue. Any help or pointers would be greatly appreciated! Thanks in advance.
Update: I would guess that this behaviour is a bug, because metadata for a database connection, for example, works as expected: I configure the connection information in the UI and execute the pipeline remotely on the server. In that case the server simply uses the metadata sent by the UI and reads/writes normally from/to the database. I would have expected the MinIO metadata to be used the same way. So my new question is: is it intended that the MinIO metadata must be provided on the server's file system instead of in the UI's pipeline XML, or is this a bug?
It's a bug that seems to be happening with multiple metadata types.