logstash_mod makePickle doesn't convert to bytes #52980
Conversation
Can you please add a regression test so we can ensure this does not fail on python3 in the future?
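A minimal regression test along these lines might look like the following sketch. The `make_pickle` helper here is a hypothetical stand-in for the handler's method, not Salt's actual code; the point is just to pin down that the payload is bytes on Python 3.

```python
import unittest


def make_pickle(message):
    # Hypothetical stand-in for the logstash handler's makePickle: on
    # Python 3 the formatted record must be encoded to bytes before it
    # is written to the socket.
    if isinstance(message, str):
        message = message.encode("utf-8")
    return message


class MakePickleRegressionTest(unittest.TestCase):
    def test_payload_is_bytes_on_python3(self):
        # str input must come back encoded; bytes must pass through.
        self.assertIsInstance(make_pickle("salt log record"), bytes)
        self.assertIsInstance(make_pickle(b"already bytes"), bytes)
```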
I have Salt installed for development + I've followed:
So... I find that there is this magic
hmm, I haven't seen that error before. Can you paste the commands you used to install the test requirements and what OS are you running from?
Sure
In virtualenv, I've performed roughly:
I wasn't able to recreate this traceback, but I was able to track down that it was fixed on develop by this commit: fded9da Can you rebase with develop and check again?
Thank you, using latest. Could you please check if the test meets your needs (it's a rather simple test)? Shall I add
thanks for adding that test :) really appreciate it.
yes, we want to ensure we have coverage for both changes.
I've added a ZMQ test. Could you please check?
thanks for taking the time to write that. Looks like there is just one lint error that needs cleaning up. I would also like @dwoz to take a look at that zmq test to ensure everything gets shut down properly so it does not affect other tests.
Yes, I forgot, now it should be fine, thank you |
Is it good to go and shall I merge or are there some more checks to be done (I've done the requested changes some time ago)? |
From what I gather from this answer on Stack Overflow, the socket should get cleaned up once the test gets disposed. If the tests aren't actually getting disposed then we should explicitly close the zmq socket/context.
I've added
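The explicit cleanup being discussed could be sketched like this, using pyzmq's public API in a small setUp/tearDown pair (the mixin name and port choices are hypothetical, not the PR's actual test code):

```python
import zmq


class ZMQHandlerTestMixin:
    """Sketch of explicit ZMQ cleanup so lingering sockets cannot
    affect tests that run later in the suite."""

    def setUp(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.setsockopt(zmq.SUBSCRIBE, b"")
        # Let the OS pick a free port instead of hard-coding one.
        self.port = self.socket.bind_to_random_port("tcp://127.0.0.1")

    def tearDown(self):
        # linger=0 drops any unsent messages immediately; terminating
        # the context afterwards releases all ZMQ resources.
        self.socket.close(linger=0)
        self.context.term()
```

Relying on garbage collection alone works in simple cases, but closing with `linger=0` and terminating the context makes the teardown deterministic.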
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I just noticed this, but that should probably be under tests/integration/log/handlers/test_logstash_mod.py
- you'll probably need a couple of directories with __init__.py
in them.
This is setting up actual TCP/IP servers, so it should go under integration tests rather than unit tests.
But the integration tests start the Salt Master and Minion, which:
I understand that these tests are not exactly 'unit' ones, but how do I put them under integration without starting the Minion and Master?
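For reference, the "actual TCP/IP servers" such tests set up can be as small as a throwaway capture server built from the stdlib; a sketch under the assumption that the test only needs to record what the logging handler sends (helper names are hypothetical):

```python
import socketserver
import threading


class _RecordingHandler(socketserver.BaseRequestHandler):
    """Appends whatever a client sends to the server's received list."""

    def handle(self):
        self.server.received.append(self.request.recv(4096))


def start_capture_server(host="127.0.0.1"):
    # Bind to port 0 so the OS assigns a free port, then serve in a
    # daemon thread so the test process can exit cleanly.
    server = socketserver.TCPServer((host, 0), _RecordingHandler)
    server.received = []
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server, server.server_address[1]
```

A test would point the logstash handler at the returned port, emit a record, then call `server.shutdown()` and `server.server_close()` in teardown.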
We don't yet have our test suites running functional tests - this is something that the Test and Release Working Group, and @s0undt3ch in particular, have been working on - which would definitely do what you're after. Sidenote: we just released this SEP as announced in our Office Hours this morning, and we're changing our branch/release strategy. The quickest way to get this merged in will be to rebase your changes on
Thanks @s0undt3ch. Is there a possibility to disable starting salt-master and salt-minion for a particular Salt integration test? @waynew please advise if you still want to move the test under
Not currently.
They are, we just can't yet control config changes on the running master/minion. It's all or nothing. Soon, we'll have functional testing, which is what's suited to your particular testing scenario.
@s0undt3ch thanks
@kiemlicz I think I'd rather (currently) see this under integration - while it's true that it will take longer to run the suite locally, the server startup only happens once when we do the full test run, so there really isn't extra overhead. You could put a TODO comment in there that explains that it should be moved to the
Adding to bytes conversion
Adding tests that check whether pickling is successful
Adding tests that check whether pickling is successful
Adding tests that check whether pickling is successful for ZMQ
Fixing lint error
Closing sockets after tests run
Moving to integration tests suite
Please excuse the delay, I've moved the tests under
Looks like this got out of date with master - I merged the latest changes in and we'll see if the build passes.
…adTestModuleNamesTestCase.test_module_name_source_match
…_names.BadTestModuleNamesTestCase.test_module_name_source_match" This reverts commit 3b46ff6.
…essage is ambiguous
Only one failing build, but that was just due to infrastructure. I've restarted py2/amazon1, then this should be good to go 👍
I guess that I won't have permissions to perform the merge and write to the repo
@kiemlicz still a couple of failing tests, and one of them still didn't have results, so I restarted the tests. I'll try to monitor this one closely for ya to make sure we get it in soon.
Thank you. |
yeah, looks like those tests are timing out. I noticed the amazon tests are taking much longer to run than previously. I created this issue: #55852 so one of us can investigate why this is occurring, but in the meantime a workaround was merged in to increase the timeout to 7 hours, so I'll update your branch so the tests can run with this new fix and we make sure all the tests run on that OS.
Removing usage of fetch unused port
This reverts commit 093b735.
What does this PR do?
Adds missing to_bytes conversion in log
What issues does this PR fix or reference?
#51123
Previous Behavior
New Behavior
No stacktrace, message sent to logstash
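In outline, the change makes sure the formatted record is bytes before it goes on the wire. A hedged sketch of a to_bytes-style conversion (this is an illustration, not Salt's actual salt.utils implementation):

```python
def to_bytes(value, encoding="utf-8"):
    # Pass bytes through untouched, encode str, and reject anything
    # else loudly so the error surfaces at conversion time rather
    # than deep inside the socket send.
    if isinstance(value, bytes):
        return value
    if isinstance(value, str):
        return value.encode(encoding)
    raise TypeError(
        "expected str or bytes, got {}".format(type(value).__name__)
    )
```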
Tests written?
No