
FEATURE: Hand message over via cache #26

Conversation

daniellienert
Contributor

If you hand over large data (e.g. the Elasticsearch queue indexer with 500 nodes per job) to an isolated command run, you exceed the maximum input length of escapeshellarg. To avoid this, this change passes the payload via a separate cache.

Fixes: #25
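The approach can be sketched as follows. This is a hedged, self-contained illustration of the idea, not the actual Flowpack\JobQueue implementation: the `storeMessage()` helper and the array-backed cache are hypothetical stand-ins for the Flow cache frontend.

```php
<?php
// Hypothetical sketch of the cache hand-over idea, not the actual
// Flowpack\JobQueue code: the payload goes into a cache under a short
// random key, and only the key is passed to the isolated command run.
// A plain array stands in for the Flow cache frontend.

$messageCache = [];

function storeMessage(array &$cache, string $payload): string
{
    $key = bin2hex(random_bytes(16)); // fixed-length 32-char identifier
    $cache[$key] = $payload;          // large payload lives in the cache
    return $key;                      // only this short key hits the CLI
}

// A large payload, e.g. the serialized identifiers of 500 nodes:
$payload = str_repeat('x', 500000);
$key = storeMessage($messageCache, $payload);

// The command-line argument is now tiny and of fixed length, so it can
// never exceed the escapeshellarg() input limit:
$arg = escapeshellarg($key);

var_dump(strlen($key) === 32);              // bool(true)
var_dump($messageCache[$key] === $payload); // bool(true)
```

The isolated sub-process would then look the payload up by key and remove the entry once the job is done.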

Member

@kdambekalns kdambekalns left a comment


Nice one! Looks good from reading it…

@@ -0,0 +1,2 @@
FlowPackJobQueueCommon_MessageCache:
frontend: Neos\Cache\Frontend\VariableFrontend
Member


I'd suggest adding persistent: true here, because this cache should not be flushed. Losing an entry would be a rare case that could only happen during a deployment or a cache flush, but anyway…
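With that suggestion applied, the Caches.yaml entry from the diff above would read as follows (assuming the standard Flow `persistent` cache option):

```yaml
FlowPackJobQueueCommon_MessageCache:
  frontend: Neos\Cache\Frontend\VariableFrontend
  persistent: true
```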

@hlubek
Member

hlubek commented Feb 8, 2018

Looks good; I just added a note about using a persistent cache to prevent flush race conditions.

@hlubek hlubek closed this Feb 8, 2018
@hlubek hlubek reopened this Feb 8, 2018
@daniellienert daniellienert force-pushed the feature-save-message-in-separate-cache branch from 1546561 to f4daa7d on February 8, 2018 at 21:01
@daniellienert
Contributor Author

Thanks @hlubek. I guess these cases are really rare, but persistent: true does no harm.

@daniellienert daniellienert merged commit 7f50205 into Flowpack:master Feb 8, 2018
@kdambekalns
Member

Yeah!

@bwaidelich
Contributor

bwaidelich commented Jul 28, 2022

I'm 4 years late to the party, but I'm afraid this change introduced a weak spot into the architecture.
The job queue was already hard to debug, and this added complexity and a potential source of failure.

No hard feelings (obviously), but I wonder how we could solve the core issue otherwise.
Maybe we could limit the maximum size of a job somehow.
Putting 500 nodes into a single job defeats the purpose of a job queue, IMO.

At the very least, I would say we should put this behind a feature flag.

@hlubek
Member

hlubek commented Jul 28, 2022

If it's about the isolated scenario: why not use STDIN to pass the data?
I agree that the cache part introduces redundancy in data storage for the job queue, which will probably break somewhere.
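The STDIN idea could be sketched with plain proc_open() on a Unix-like system. Here, `cat` is an illustrative stand-in for the real isolated Flow sub-command (it simply echoes its STDIN back), and the payload is kept below the typical pipe buffer size so this simple blocking sketch cannot deadlock:

```php
<?php
// Hedged sketch of the STDIN suggestion using plain proc_open().
// 'cat' stands in for the real isolated Flow sub-command; it echoes
// its STDIN back to STDOUT. The payload is kept below the typical
// 64 KiB pipe buffer so this blocking sketch cannot deadlock; a real
// implementation would use stream_select() or non-blocking streams
// to support arbitrary payload sizes.

$payload = str_repeat('x', 32768);

$descriptors = [
    0 => ['pipe', 'r'], // child's STDIN: we write the payload here
    1 => ['pipe', 'w'], // child's STDOUT: we read the result here
    2 => ['pipe', 'w'], // child's STDERR
];

$process = proc_open('cat', $descriptors, $pipes);

fwrite($pipes[0], $payload); // no escapeshellarg(), no length limit
fclose($pipes[0]);           // EOF tells the child the message is complete

$received = stream_get_contents($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
$exitCode = proc_close($process);

var_dump($exitCode === 0);        // bool(true)
var_dump($received === $payload); // bool(true)
```

Because the payload never appears on the command line, there is no shell-argument length limit to hit, and no second data store to keep in sync with the queue.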

@bwaidelich
Contributor

bwaidelich commented Jul 28, 2022

why not use STDIN to pass the data?

Great idea!
Not sure if we can make that work with Scripts::executeCommand(), but using something more sophisticated like react/child-process might make sense anyway, in order to have more control over the execution.


Successfully merging this pull request may close these issues.

Message passing via command argument exceeds maximum length of escapeshellarg