Added scripts for testing functionality #306

Open
wants to merge 1 commit into master

Conversation

sporks5000

We were talking about standardized testing the other day, so I thought I would throw something together to help make it easier to be confident that things are functional across all systems.

What I have here isn't comprehensive, but it covers all of the major changes that you've pulled from me as well as basic functionality for scanning and monitoring. I wanted to add something for cleaning, restoring, and quarantining as well, but I was short on time and I figured that it was better to get something out there than nothing at all.

The README.md file covers all the details, but the basic gist is: you run /usr/local/maldetect/tests/test.sh and it loops through all of the test scripts that are present in the directory and stops when any of them fail, or when it reaches the end.
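
For readers who want a mental model before opening the PR, here is a minimal sketch of that kind of runner. The test_*.sh naming convention and the exact output messages are assumptions for illustration; the actual test.sh in this PR may organize things differently.

#!/usr/bin/env bash
# Minimal sketch of a sequential test runner: execute every script
# matching test_*.sh in this directory and stop at the first failure.
# The naming convention here is an assumption, not taken from the PR.
set -u

v_TESTS_DIR="$( cd "$( dirname "$0" )" && pwd )"

for v_SCRIPT in "$v_TESTS_DIR"/test_*.sh; do
    [ -e "$v_SCRIPT" ] || continue
    echo "Running ${v_SCRIPT##*/}"
    if ! bash "$v_SCRIPT"; then
        echo "FAILED: ${v_SCRIPT##*/}" >&2
        exit 1
    fi
done

echo "All tests passed"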

I've used this to test on the following operating systems, both with and without clamscan and inotifywait, and all of them reported success for the tests that are present so far:

  • CentOS 6
  • CentOS 7
  • Debian 8
  • Ubuntu 14.04
  • Ubuntu 16.04
  • Ubuntu 18.04

I did run into a few instances where the monitoring-related tests failed because "ed" was not installed, but I think that's more an issue with the base images I have access to being unusual than anything else.

Let me know if you have any questions I can address.

@rfxn
Owner

rfxn commented Sep 9, 2018

@sporks5000 This is fantastic. At a glance, it is a solid framework and I really appreciate the contribution and time you've committed. I'll be reviewing tonight / tomorrow :)

@sporks5000
Author

sporks5000 commented Sep 9, 2018

It's rough in some areas, but there's no such thing as code that can't be improved upon, so that's fine. If I had more time I'd rework it to better match the style of the rest of the project, but I have a few other deadlines coming up, so I wrote it in the style I'm most familiar with in order to get it done more quickly.

I see that Codacy recommends changing

v_NAME="$( echo "$i" | sed "s/^skipped_//" )"

to

v_NAME="${i/^skipped_//}"

I know that would work in current versions of Bash. If it's fully backwards compatible with older versions, it makes the most sense to change it, but I'll admit that I'm not sure off the top of my head.
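
For what it's worth, prefix removal via ${i#skipped_} has been available in every bash release (it is also POSIX sh), while the ${parameter/pattern/string} substitution form was added in bash 2.0 and treats ^ as a literal character rather than an anchor, so the anchored substitution variant would be ${i/#skipped_/}. A small illustration of the portable form, reusing the variable names from the snippet above with a hypothetical value:

# Hypothetical value for i, just to show the two forms side by side.
i="skipped_monitor_basic"

# Current approach in the PR (spawns echo and sed):
v_NAME="$( echo "$i" | sed "s/^skipped_//" )"

# Parameter expansion: strip the shortest leading match of "skipped_".
# This form works in every bash release and in POSIX sh.
v_NAME="${i#skipped_}"

echo "$v_NAME"    # prints: monitor_basic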

@sporks5000
Author

Also, I have come to the conclusion that I like this so much that I'm going to be incorporating it into a few of my other projects as well. Thank you for inspiring me to write the basics of this!

@sporks5000
Author

A note:

Most of the tests I've put together here rely on copying files and modifying them with sed statements in order to test their functionality. In places this is a little messy, and after some number of unforeseen future changes to those files it has the potential to leave the tests no longer working as anticipated.
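
As a concrete illustration of that copy-and-modify pattern (the variable name and sed expression below are hypothetical, not taken from the actual test scripts):

# Work on a throwaway copy of the script under test rather than the
# installed original, then point one of its settings at a sandbox path.
cp /usr/local/maldetect/maldet /tmp/maldet.under_test
sed -i 's|^v_SCAN_ROOT=.*|v_SCAN_ROOT=/tmp/maldet_test_root|' /tmp/maldet.under_test

# Run the modified copy and check for the behaviour the test expects.
bash /tmp/maldet.under_test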

When I was writing these tests, the solution I anticipated was that when a test failed, we would investigate why. If the failure turned out to be an unexpected side effect of the portions of the code that were changed, we would either modify the new code to make the test succeed, or make the call that the benefits of the new code outweighed the benefits of maintaining the old functionality; if it turned out that the code was fine but the test was no longer measuring the same things because of the change, we would simply change the test. This is feasible, but it raises the question "How will we detect when a test is producing a false success?"

An alternative to the method I used would be something along the lines of adding a "--test" flag and having portions of the script behave differently when that flag was used. This seems like it would achieve better consistency, but it would almost certainly do so at the expense of riddling the script with blocks of code that would rarely ever be used, that would slow the code down overall (even if just slightly), and that would potentially make the flow of the script more difficult to interpret for people who wanted to contribute in the future.
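
For clarity, this is the kind of branching that approach would scatter through the script; the flag handling and the fixture path here are hypothetical:

# Hypothetical sketch of the "--test" flag approach.
v_TEST_MODE=0
for v_ARG in "$@"; do
    if [ "$v_ARG" = "--test" ]; then
        v_TEST_MODE=1
    fi
done

if [ "$v_TEST_MODE" -eq 1 ]; then
    # Test-only branch: operate on a known fixture tree instead of
    # whatever target the user asked for.
    v_SCAN_TARGET="/usr/local/maldetect/tests/fixtures"
else
    v_SCAN_TARGET="${1:-.}"
fi

echo "would scan: $v_SCAN_TARGET"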

A middle road might be possible here: if we surrounded every block of testing code with comments marking it as test-only, the install script could use sed statements to strip those blocks (unless it was installed with a "--testing" flag, perhaps). Honestly, though, that sounds even more convoluted than testing by copying files and modifying them with sed, and it would always leave us with the question "How are we sure that the final version of the code executes as expected after those blocks are removed?"
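
A tiny sketch of what that stripping could look like; the marker strings, the test hook, and the file path are all hypothetical:

# Inside the main script, test-only code would be fenced with markers:
# BEGIN TEST-ONLY
echo "test hook: record state for the test harness" >> /tmp/test_hooks.log
# END TEST-ONLY

# And the install script would strip every fenced block unless the
# hypothetical --testing flag was passed:
if [ "${1:-}" != "--testing" ]; then
    sed -i '/^# BEGIN TEST-ONLY$/,/^# END TEST-ONLY$/d' ./maldet
fi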

I honestly don't have a good answer as to which of these three solutions is the BEST, but regardless of which implementation is chosen, I do feel that standard tests to verify functionality in all environments are something this project needs. They would give the developers more confidence when pushing new versions, and give end users more confidence that the script is performing the duties they expect of it.
