
logstash_mod makePickle doesn't convert to bytes #52980

Merged
merged 17 commits into saltstack:master on Jan 16, 2020

Conversation

kiemlicz
Contributor

What does this PR do?

Adds the missing to_bytes conversion in the logstash log handler

What issues does this PR fix or reference?

#51123

Previous Behavior

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.6/logging/handlers.py", line 634, in emit
    self.send(s)
  File "/usr/lib/python3.6/logging/handlers.py", line 692, in send
    self.sock.sendto(s, self.address)
TypeError: a bytes-like object is required, not 'str'

New Behavior

No stacktrace, message sent to logstash
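
In essence (a hedged sketch of the kind of change, not the exact diff - the real handler lives in salt/log/handlers/logstash_mod.py):

import logging.handlers

import salt.utils.stringutils


class DatagramLogstashHandler(logging.handlers.DatagramHandler):
    def makePickle(self, record):
        # DatagramHandler.send() calls sendto(), which on Python 3 requires
        # bytes, so the formatted record is converted before being returned
        # (previously the str from self.format(record) was returned as-is).
        return salt.utils.stringutils.to_bytes(self.format(record))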

Tests written?

No

Contributor

@Ch3LL Ch3LL left a comment


Can you please add a regression test so we can ensure this does not fail on python3 in the future?

@kiemlicz
Contributor Author

kiemlicz commented Jun 4, 2019

I have Salt installed for development and I've followed:
https://docs.saltstack.com/en/latest/topics/development/tests/index.html#running-test-subsections
Running any test (on the develop branch) yields:

> python3 tests/runtests.py --unit
Traceback (most recent call last):
  File "tests/runtests.py", line 62, in <module>
    from tests.integration import TestDaemon  # pylint: disable=W0403
  File "/home/stanislaw/projects/open-source/salt/tests/integration/__init__.py", line 36, in <module>
    from tests.support.case import ShellTestCase
  File "/home/stanislaw/projects/open-source/salt/tests/support/case.py", line 35, in <module>
    from tests.support.mixins import AdaptedConfigurationTestCaseMixin, SaltClientTestCaseMixin
  File "/home/stanislaw/projects/open-source/salt/tests/support/mixins.py", line 33, in <module>
    import salt.config
  File "/home/stanislaw/projects/open-source/salt/salt/config/__init__.py", line 27, in <module>
    import salt.utils.network
  File "/home/stanislaw/projects/open-source/salt/salt/utils/network.py", line 24, in <module>
    import salt.utils.win_network
  File "/home/stanislaw/projects/open-source/salt/salt/utils/win_network.py", line 42, in <module>
    USE_WMI = StrictVersion(platform.version()) < StrictVersion('6.2')
  File "/usr/lib/python3.5/distutils/version.py", line 40, in __init__
    self.parse(vstring)
  File "/home/stanislaw/projects/open-source/salt/salt/utils/versions.py", line 36, in parse
    _StrictVersion.parse(self, vstring)
  File "/usr/lib/python3.5/distutils/version.py", line 137, in parse
    raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number '#1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08)'

So... I see there is this magic _version.py that I obviously need to set up, but I've had no luck creating it properly.
How do I create salt/_version.py locally so that I can proceed?

@Ch3LL
Contributor

Ch3LL commented Jun 6, 2019

Hmm, I haven't seen that error before. Can you paste the commands you used to install the test requirements, and what OS are you running?

@kiemlicz
Contributor Author

kiemlicz commented Jun 6, 2019

Sure.
First: it's the develop branch.
My OS is Debian 9.9.

> uname -ra
Linux my-hostname-here 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux

In a virtualenv, I've performed roughly:

salt_dir > pip3 install pyzmq PyYAML pycrypto msgpack-python jinja2 psutil futures tornado
salt_dir > pip3 install -e .
salt_dir > pip3 install -r requirements/dev_python34.txt
salt_dir > pip3 install -r requirements/pytest.txt
salt_dir > pip3 install -r requirements/tests.txt

@Ch3LL
Contributor

Ch3LL commented Jun 12, 2019

I wasn't able to recreate this traceback, but I was able to track down why: it was fixed on develop by this commit: fded9da

Can you rebase with develop and check again?

@kiemlicz
Contributor Author

Thank you, using the latest develop works!
I've added a simple unit test that checks whether the log record is successfully pickled, sent and received (roughly the shape sketched below).

Could you please check whether the test meets your needs (it's a rather simple test)?

Shall I add a ZMQLogstashHandler test as well?
However, since the error occurs right before hitting the wire, I need (?) some ZMQ test receiver.
Given I have created one (or is there some other approach I should take?), should I move such a test to the integration tests (it adds some runtime overhead)?
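
For reference, a rough sketch of the shape of that unit test (hypothetical names; the import path and handler constructor are assumptions, the real test is in this PR):

import logging
import socket
import unittest

# Assumed import path and class name for the handler under test.
from salt.log.handlers.logstash_mod import DatagramLogstashHandler


class DatagramLogstashHandlerTestCase(unittest.TestCase):
    def setUp(self):
        # A loopback UDP socket stands in for logstash and records what arrives.
        self.server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.server.bind(('127.0.0.1', 0))     # let the OS pick a free port
        self.server.settimeout(5)              # fail instead of hanging forever
        port = self.server.getsockname()[1]
        self.handler = DatagramLogstashHandler('127.0.0.1', port)
        self.logger = logging.getLogger('test_logstash_logger')
        self.logger.setLevel(logging.DEBUG)
        self.logger.addHandler(self.handler)

    def tearDown(self):
        self.logger.removeHandler(self.handler)
        self.server.close()

    def test_record_is_sent_as_bytes(self):
        self.logger.error('logstash test message')
        payload, _ = self.server.recvfrom(4096)
        self.assertIsInstance(payload, bytes)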

@Ch3LL
Contributor

Ch3LL commented Jun 25, 2019

thanks for adding that test :) really appreciate it.

Shall I add a ZMQLogstashHandler test as well?

yes we want to ensure we have coverage for both changes.

@kiemlicz
Contributor Author

I've added the ZMQ test.
I must admit though, I'm not proud of that test: I couldn't send/receive the very first message,
so I created a 'loopy' workaround (sketched below), since I want to test the pickling, not ZMQ itself.

Could you please check?
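
Roughly what the 'loopy' part looks like (a hedged sketch with hypothetical names; a plain PUB socket stands in for the Salt ZMQ handler). A PUB socket can drop messages sent before the SUB socket finishes connecting (the ZeroMQ slow-joiner problem), so the test keeps sending and polling until a payload arrives:

import time
import unittest

import zmq


class ZMQLogstashHandlerTestCase(unittest.TestCase):
    def setUp(self):
        self.context = zmq.Context()
        self.subscriber = self.context.socket(zmq.SUB)
        self.port = self.subscriber.bind_to_random_port('tcp://127.0.0.1')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, b'')

    def test_zmq_payload_is_bytes(self):
        # The real test would emit through the ZMQ logstash handler;
        # a plain PUB socket stands in for it in this sketch.
        publisher = self.context.socket(zmq.PUB)
        publisher.connect('tcp://127.0.0.1:{0}'.format(self.port))
        payload = None
        for _ in range(50):  # 'loopy' retry: the first sends may be dropped
            publisher.send(b'logstash test message')
            try:
                payload = self.subscriber.recv(zmq.NOBLOCK)
                break
            except zmq.ZMQError:
                time.sleep(0.1)
        publisher.close()
        self.assertIsInstance(payload, bytes)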

@Ch3LL
Contributor

Ch3LL commented Jul 23, 2019

Thanks for taking the time to write that. Looks like there is just one lint error that needs cleaning up.

I would also like @dwoz to take a look at that zmq test to ensure everything gets shut down properly so it does not affect other tests.

@Ch3LL Ch3LL requested a review from dwoz July 23, 2019 14:48
@kiemlicz kiemlicz requested a review from a team as a code owner July 23, 2019 18:40
@kiemlicz
Contributor Author

Yes, I forgot - it should be fine now, thank you.

@kiemlicz
Contributor Author

Is this good to go and shall I merge, or are there more checks to be done (I made the requested changes some time ago)?

@waynew
Contributor

waynew commented Aug 16, 2019

From what I gather from this answer on Stack Overflow, the socket should get cleaned up once the test gets disposed. If the tests aren't actually getting disposed then we should explicitly close the zmq socket/context.

@kiemlicz
Contributor Author

I've added close() and term() calls to tearDown (roughly as below). Is that OK?
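
Roughly (assuming the socket and context are created in setUp, as in the sketch above):

    def tearDown(self):
        self.subscriber.close()   # close the socket before terminating the context,
        self.context.term()       # otherwise term() could block on the open socket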

Contributor

@waynew waynew left a comment


I just noticed this, but that should probably be under tests/integration/log/handlers/test_logstash_mod.py - you'll probably need a couple of directories with __init__.py in them (roughly the layout below).

This is setting up actual TCP/IP servers, so it should go under integration tests rather than unit tests.
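
That is, something like:

tests/integration/log/__init__.py
tests/integration/log/handlers/__init__.py
tests/integration/log/handlers/test_logstash_mod.py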

@kiemlicz
Contributor Author

But the integration tests start the Salt master and minion, which:

  • vastly increases the runtime
  • isn't needed here - I don't need the master and minion running

I understand that these tests are not exactly 'unit' ones, but how do I put them under integration without starting the minion and master?

@waynew
Contributor

waynew commented Oct 1, 2019

We don't yet have our test suites running functional tests - this is something that the Test and Release Working Group, and @s0undt3ch in particular, has been working on - and that would definitely do what you're after.


Sidenote: we just released this SEP as announced in our Office Hours this morning, and we're changing our branch/release strategy.

The quickest way to get this merged will be to rebase your changes on master, force-push to your fork, and edit this PR to point to master instead.

@kiemlicz
Contributor Author

kiemlicz commented Oct 1, 2019

Thanks.
So the integration tests are not yet running (in some kind of CI), but eventually they will be?

@s0undt3ch Is there a possibility to disable starting salt-master and salt-minion for a particular Salt integration test?

@waynew please advise whether you still want to move the test under integration even if there is no way to avoid running salt-master/minion in integration tests (please keep in mind that the runtime impact is huge)

@s0undt3ch
Member

Is there a possibility to disable starting salt-master and salt-minion for a particular Salt integration test?

Not currently.

@s0undt3ch
Member

So the integration tests are not yet running (in some kind of CI), but eventually they will be?

They are, we just can't yet control config changes on the running master/minion. It's all or nothing.

Soon, we'll have functional testing, which is what's suited to your particular testing scenario.

@kiemlicz
Contributor Author

kiemlicz commented Oct 1, 2019

@s0undt3ch thanks
@waynew please advise whether you still want this under the integration package (keeping in mind the runtime impact)

@waynew
Contributor

waynew commented Oct 1, 2019

@kiemlicz I think I'd rather (currently) see this under integration - while it's true that it will take longer to run the suite locally, the server startup only happens once when we do the full test run, so there really isn't extra overhead.

You could put a TODO comment in there that explains that it should be moved to the functional suite when that becomes available.

Adding tests that check whether pickling is successful
Adding tests that check whether pickling is successful
Adding tests that check whether pickling is successful for ZMQ
Closing sockets after tests run
@kiemlicz kiemlicz changed the base branch from develop to master November 17, 2019 19:52
Moving to integration tests suite
@kiemlicz
Contributor Author

Please excuse the delay - I've moved the tests under the integration suite. Could you please verify?

@kiemlicz kiemlicz requested review from waynew and Ch3LL December 4, 2019 17:48
@waynew
Contributor

waynew commented Dec 27, 2019

Looks like this got out of date with master - I merged the latest changes in and we'll see if the build passes.

…adTestModuleNamesTestCase.test_module_name_source_match
…_names.BadTestModuleNamesTestCase.test_module_name_source_match"

This reverts commit 3b46ff6.
@waynew
Contributor

waynew commented Jan 2, 2020

Only one failing build, and that was just due to infrastructure. I've restarted py2/amazon1, so this should be good to go 👍

@kiemlicz
Contributor Author

kiemlicz commented Jan 8, 2020

I guess I won't have permissions to perform the merge and write to the repo.
Will someone with the permissions be able to merge this?

@Ch3LL
Contributor

Ch3LL commented Jan 9, 2020

@kiemlicz there are still a couple of failing tests, and one of them still didn't have its results, so I restarted the tests. I'll try to monitor this one closely for ya to make sure we get it in soon.

@kiemlicz
Contributor Author

kiemlicz commented Jan 11, 2020

Thank you.
I've checked the test results and I don't think I introduced any breaking changes.
The master branch times out as well.

@Ch3LL
Contributor

Ch3LL commented Jan 13, 2020

Yeah, looks like those tests are timing out. I noticed the amazon tests are taking much longer to run than previously. I created this issue: #55852 so one of us can investigate why this is occurring, but in the meantime a workaround was merged to increase the timeout to 7 hours. I'll update your branch so the tests can run with this new fix and we make sure all the tests run on that OS.

@dwoz dwoz merged commit 1a0b0f7 into saltstack:master Jan 16, 2020