
t/dash-x.t fails on Fedora in presence of the Lmod package #3

Open · eserte opened this issue Jan 11, 2020 · 9 comments

@eserte commented Jan 11, 2020

The test suite fails on my Fedora smoker:

#   Failed test 'basic bash -x works'
#   at t/dash-x.t line 14.
# STDERR is:
# + '[' -z '' ']'
# + case "$-" in
# + __lmod_vx=x
# + '[' -n x ']'
# + set +x
# Shell debugging temporarily silenced: export LMOD_SH_DBG_ON=1 for this output (/usr/share/lmod/lmod/init/bash)
# Shell debugging restarted
# + unset __lmod_vx
# + /usr/bin/perl -e 'exit 0'
# 
# not:
# + /usr/bin/perl -e 'exit 0'
# 
# as expected
# Looks like you failed 1 test of 1.
t/dash-x.t ........ 
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests 

Reproducible in Docker using the following Dockerfile:

FROM fedora:31

RUN yum -y install perl-CPAN
RUN yum -y install perl-Capture-Tiny perl-Path-Tiny
RUN yum -y install perl-Test-Simple perl-IPC-System-Simple perl-Want
RUN yum -y install Lmod
RUN yum -y install perl-Test-Most perl-Test-Output
RUN yum -y install perl-Test-Command perl-Contextual-Return
CMD true

Then run:

docker build -t perl-test . && docker run -it perl-test bash
cpan -t BAREFOOT/PerlX-bash-0.04.tar.gz < /dev/null

After doing yum remove Lmod, the test suite passes again.
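
For context on what's being compared: the failing check is an exact-match test of the bash -x trace captured on stderr. A stripped-down sketch of such a test (hypothetical code, not the actual t/dash-x.t, which presumably drives the command through PerlX::bash itself) might look like this:

use Test::More tests => 1;
use Test::Output;

# Capture the stderr trace of a bash -x run and compare it verbatim.
# Any init noise that bash emits before the command (such as Lmod's
# sourced init script) breaks the exact match.
my $stderr = stderr_from { system q{bash -x -c "/usr/bin/perl -e 'exit 0'"} };
is $stderr, "+ /usr/bin/perl -e 'exit 0'\n", 'basic bash -x works';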

@barefootcoder (Owner) commented Jan 15, 2020

Interesting. Well, I didn't see anything concrete in lmod's docs, but would it be safe to assume that just setting $ENV{LMOD_SH_DBG_ON} = 1 before running the tests would fix it?
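
(Concretely, that idea would amount to something like this at the top of the test file; a hypothetical sketch only, and as the next comment shows, the variable makes the trace noisier rather than cleaner:)

# Hypothetical guard: ask Lmod to leave shell debugging alone rather than
# silencing and restarting it. (The next comment shows this backfires.)
BEGIN { $ENV{LMOD_SH_DBG_ON} = 1 }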

@eserte (Author) commented Jan 15, 2020

Output gets noisier with the environment variable set:

$ env LMOD_SH_DBG_ON=1 cpan_smoke_modules -pv 5.30.1 PerlX::bash
...
#   Failed test 'basic bash -x works'
#   at t/dash-x.t line 14.
# STDERR is:
# + '[' -z x ']'
# + '[' -n '' ']'
# + export LMOD_ROOT=/usr/share/lmod
# + LMOD_ROOT=/usr/share/lmod
# + export LMOD_PKG=/usr/share/lmod/lmod
# + LMOD_PKG=/usr/share/lmod/lmod
# + '[' -n '' ']'
# + export LMOD_DIR=/usr/share/lmod/lmod/libexec
# + LMOD_DIR=/usr/share/lmod/lmod/libexec
# + export LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod
# + LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod
# + export MODULESHOME=/usr/share/lmod/lmod
# + MODULESHOME=/usr/share/lmod/lmod
# + export LMOD_SETTARG_FULL_SUPPORT=no
# + LMOD_SETTARG_FULL_SUPPORT=no
# + '[' no = no ']'
# + LMOD_VERSION=8.1.17
# + export LMOD_VERSION
# + '[' : '!=' : ']'
# + unalias ml
# + true
# + '[' -n '5.0.11(1)-release' -a yes '!=' no ']'
# + export -f module
# + export -f ml
# + unset export_module
# + '[' 5 -ge 3 ']'
# + '[' -r /usr/share/lmod/lmod/init/lmod_bash_completions ']'
# + '[' -n '' ']'
# + '[' -n '' ']'
# + /opt/perl-5.30.1/bin/perl -e 'exit 0'
#
# not:
# + /opt/perl-5.30.1/bin/perl -e 'exit 0'
#
# as expected
# Looks like you failed 1 test of 1.
t/dash-x.t ........
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests
...
@barefootcoder (Owner) commented Jan 16, 2020

Ah, yes ... I see why they normally suppress it now. :-D

Well, I'm either going to have to skip that unit test file altogether under Lmod, or take extra steps in its presence to hack up the stderr capture. The former is certainly safer in terms of future test breakage, while the latter is safer in terms of present test coverage. I'm a bit on the fence, personally, so I welcome your input. Otherwise I'll just flip a coin in a day or so. ;->

@eserte (Author) commented Jan 16, 2020

Personally, I would choose the path of increased test coverage: capture stderr, check the contents for known "bad lines" (maybe there will be more in the future), and if something matches, skip the test case; otherwise do an equality test.
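
A sketch of that skip-if-polluted variant (hypothetical code, reusing the reduced test from the first comment; the maintainer's filtering approach appears in the next comment):

use Test::More tests => 1;
use Test::Output;

# Patterns that mark the capture as polluted; the list can grow if other
# tools start injecting noise into bash -x output.
my @bad_lines = (
    qr/Shell debugging temporarily silenced/,
    qr/Shell debugging restarted/,
);

my $stderr = stderr_from { system q{bash -x -c "/usr/bin/perl -e 'exit 0'"} };
SKIP: {
    skip 'bash -x stderr polluted (Lmod?)', 1
        if grep { $stderr =~ $_ } @bad_lines;
    is $stderr, "+ /usr/bin/perl -e 'exit 0'\n", 'basic bash -x works';
}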

@barefootcoder (Owner) commented Jan 16, 2020

I was thinking something along these lines (just spitballing here; utterly untested code ahead).

use Test::More;
use Test::Output;

sub bash_debug_is (&$$)
{
    local $Test::Builder::Level = $Test::Builder::Level + 1;
    my ($code, $expected, $testname) = @_;

    # stderr_from has a (&) prototype, so pass it a block, not a bare coderef
    my $stderr = stderr_from { $code->() };
    if ($stderr =~ /Shell debugging temporarily silenced/)
    {
        # looks like Lmod is installed: strip everything up through its
        # "Shell debugging restarted" line, plus the trailing "unset" trace
        # line if present (/s added so .*? can span the multi-line preamble)
        $stderr =~ s/\A     .*?
                            ^ Shell \s debugging \s restarted \n
                        (?: ^ \+ \s unset \s [^\n]*           \n )?
                    //msxs;
    }
    is $stderr, $expected, $testname;
}

How about that? It gives a hook in case there are other needs to fiddle with debugging output in the future, and it still runs the test, even under Lmod. I'll fiddle with it a bit to make sure it can handle your sample output above, but, assuming I have it doing what I think it'll do, does that seem like a sound approach?
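
For concreteness, a hypothetical call site (the real t/dash-x.t invocation may differ, since the actual test presumably drives bash through PerlX::bash itself):

bash_debug_is { system q{bash -x -c "/usr/bin/perl -e 'exit 0'"} }
    "+ /usr/bin/perl -e 'exit 0'\n",
    'basic bash -x works';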

@eserte (Author) commented Jan 17, 2020

Yes, this looks good.

@barefootcoder (Owner) commented Jan 17, 2020

Okay, I've got a version of the above that works, so I'll just whiz it on out there as soon as I can.

@barefootcoder (Owner) commented Jan 20, 2020

Okay, 0.04_01 seems to have fixed the problem, right? I'll promote to full when I get a chance.

@eserte (Author) commented Jan 23, 2020

Yes, looks good. Go with the new release.
