
Implicit close()s are silently unchecked for error #9436

Open
p5pRT opened this issue Aug 1, 2008 · 40 comments

Comments


@p5pRT p5pRT commented Aug 1, 2008

Migrated from rt.perl.org#57512 (status was 'open')

Searchable as RT57512$


@p5pRT p5pRT commented Aug 1, 2008

From tchrist@perl.com

Created by tchrist@chthon.perl.com

  ( Perhaps the severity of this bug should be "high",
  not "medium"? Feel free to upgrade the status if you
  find yourself agreeing with me. )

When Perl implicitly closes a filehandle, it neglects to detect a
failed close and to notify the user of the failure. This lets
potentially severe errors go undetected. It needs to be fixed.

Perl implicitly closes a filehandle under the following circumstances​:

  (1) When you reopen a handle that's already open. The return
  value reflects only the success of the open, not that of a
  failed close. This may include the ARGVOUT (perl -i, $^I)
  situation, which also goes astonishingly unchecked for error,
  surely a mistake.

  (2) When an implicitly allocated but still open filehandle
  gets GC'd. This includes both

  open(my $fh, "> /tmp/foo.$$"); # please use File​::Temp

  style filehandles as well as those in the localized
  typeglob, such as​:

  local *FH;

  (3) Prior to certain critical operations, such as fork(), flock(),
  and perhaps seek()?, where you don't want a dirty or duplicate
  I/O buffer, an implicit fflush() executes. The close-on-exec
  flag will close them at exec(), but they needed to be already
  flushed at fork() time.

  (4) During global destruction, Perl chases down all the open handles
  and fflush()es and fclose()s them. Cf exit() vs _exit() in
  libc. This failure to report errors is most egregious on
  STDOUT, as every program that needs to operate correctly (and
  which ones don't?) needs must install

  END { close(STDOUT) || die "can't close STDOUT​: $!" }

  Since this is all-but-compulsory on any program you want to
  behave correctly, it needs to be in all programs; hence, it
  needs to be in the run-time.
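The END-block safety net described above can be exercised end to end. A
minimal sketch, assuming a Linux system where the /dev/full device fails
every write with ENOSPC (the script name is invented for the example):

```shell
# check_stdout.pl: the defensive END block from the report, which turns
# a deferred STDOUT flush error into a nonzero exit status.
cat > check_stdout.pl <<'EOF'
END { close(STDOUT) || die "can't close STDOUT: $!" }
print "some buffered output\n";
EOF

perl check_stdout.pl > /dev/null && echo "to /dev/null: ok"
perl check_stdout.pl > /dev/full || echo "to /dev/full: exit status $?"
```

Without the END block, the second run exits 0 and the lost output goes
unreported, which is precisely the complaint.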

The nature of the warning or error to report is not obvious. Some
situations may call for one level, others for another. It is possible
that all but the final case should be a warning only. I tend to think
this should be of class (S io), that is, a severe, on-by-default
warning of class io.

However, the global destruction errors, *particularly* of STDOUT, are
something else. I believe that a failure to close STDOUT at that point
(maybe at 2, too?) should and must cause the program's exit status to
be non-zero, for it has failed just as much as a broken-piped program
has failed, and *those* programs die of SIGPIPE, thus exiting 141, not 0.

Below is the relevant portion of the originating discussion that I
sent to p5p. In it, you see how when the cat program hits this failure,
it both reports the error and exits nonzero. I feel perl should do
at least as much.

  Subject​: Re​: Creative and *routine* use of so-called "magic" ARGV (was [perl #2783] Security of ARGV using 2-argument open)
  In-Reply-To​: Message from Abigail <abigail@​abigail.be>
  of "Tue, 29 Jul 2008 09​:29​:13 +0200." <20080729072913.GM30221@​almanda>
  Date​: Tue, 29 Jul 2008 15​:19​:34 -0600
  Message-ID​: <29912.1217366374@​chthon>
  From​: Tom Christiansen <tchrist@​chthon>

Watch​:

  % df -h .
  Filesystem Size Used Avail Capacity Mounted on
  /dev/wd0a 124M 117M -5.0K 100% /

  % ls -l it*
  -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it

  % perl -i.orig -pe 'print "this is more stuff\n"' it

  % echo $?
  0

  % ls -l it*
  0 -rw-r--r-- 1 tchrist wheel 0 Jul 29 15​:05 it
  0 -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it.orig

To this day, Perl's implicit closing of files doesn't warn you of
errors, let alone exit nonzero. This makes it do the wrong thing and
not even tell you it did so. This is a *true* problem, because
checking for the success of print() is neither necessary nor
sufficient to detect the success of print(). Yes, you read that
correctly. It's because of buffering, plus the persistence of the
error flag on the file structure.

I've never convinced anybody this is important. Since *every*
program should do this for correctness, it has to be in the run-time
system to avoid it ever being forgotten. Sure, there's stuff like
this you can do​:

  END { close(STDOUT) || die "can't close stdout​: $!" }

But that sort of thing should happen on all implicitly closed things. And
it really must. Even IO​::Handle​::close doesn't bother. Perl knows what
handles it's flushing and closing during global destruction, at least
if you don't just blindly fflush(0).

Watch again, starting from before the bogus, ill-reported rename attempt​:

  % df -h .
  Filesystem Size Used Avail Capacity Mounted on
  /dev/wd0a 124M 117M -5.0K 100% /

  % ls -l it
  -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it

  % perl -e 'print "stuff\n"' >> it; echo $?
  0

  % perl -e 'open(my $fh, ">>it") || die "open $!"; print $fh "stuff\n"; print STDOUT "ok now\n"'
  ok now

  % echo $?
  0

  % ls -l it
  -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it

This is all incorrect behavior on Perl's part. Even cat knows better!

  % echo foo | cat >> it; echo $?
  cat​: stdout​: No space left on device
  1

+-------------------------------------------------------+
| Notice how even my cat is smarter than your perl!? :( |
+-------------------------------------------------------+

What's that about, eh?

But I've been saying this for years, just like everything else.
Makes no difference. Depressing, eh?

BTW, this proves my point about checking for print's status
being a waste of time, neither necessary nor sufficient to
catch failed prints!

  % perl -e 'open(my $fh, ">>it") || die "open $!"; print $fh "stuff\n" or die "print $!"; print STDOUT "ok now\n"'; echo exit status was $?
  ok now
  exit status was 0

And you can't get the danged thing to detect its folly​:

  % perl -WE 'open(my $fh, ">>it") || die "open $!"; say $fh "stuff" or die "print $!"; say "ok now"' ; echo exit status was $?
  ok now
  exit status was 0

I firmly believe that this needs to be a part of *EVERY* Perl program,
which consequently means it should *not* be there, but in the core itself​:

  END { close(STDOUT) || die "can't close stdout​: $!" }

And I believe this *much* more than some here believe <ARGV> needs
to implicitly map {"< $_\0"} @​ARGV (since that breaks existing
working code, whereas mine fixes existing broken code). Furthermore,
I also believe all *IMPLICIT* closes that fail need to generate a
mandatory io warning, but that of STDOUT should be a fatal. I've
believed all this for a long time; said it often enough. Anything
else is just plain wrong behavior. I'm not quite sure whether
ARGVOUT failure should be a warning or a fatal, but it should be
*something* suitably noisy.

Perl Info

Flags:
    category=core
    severity=medium

Site configuration information for perl 5.10.0:

Configured by tchrist at Mon May 19 00:58:33 MDT 2008.

Summary of my perl5 (revision 5 version 10 subversion 0) configuration:
  Platform:
    osname=openbsd, osvers=3.7, archname=OpenBSD.i386-openbsd-thread-multi-64int
    uname='openbsd chthon 3.7 generic#50 i386 '
    config_args='-Dusethreads -Duse64bitint -O -des'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=define, use64bitall=undef, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='cc', ccflags ='-pthread -fno-strict-aliasing -pipe -I/usr/local/include',
    optimize='-O2',
    cppflags='-pthread -fno-strict-aliasing -pipe -I/usr/local/include'
    ccversion='', gccversion='3.3.5 (propolice)', gccosandvers='openbsd3.7'
    intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
    ivtype='long long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=4, prototype=define
  Linker and Libraries:
    ld='cc', ldflags ='-pthread -Wl,-E  -L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib
    libs=-lm -lutil -lc
    perllibs=-lm -lutil -lc
    libc=/usr/lib/libc.so.34.2, so=so, useshrplib=false, libperl=libperl.a
    gnulibc_version=''
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
    cccdlflags='-DPIC -fPIC ', lddlflags='-shared -fPIC  -L/usr/local/lib'

Locally applied patches:
    


@INC for perl 5.10.0:
    /usr/local/lib/perl5/5.10.0/OpenBSD.i386-openbsd-thread-multi-64int
    /usr/local/lib/perl5/5.10.0
    /usr/local/lib/perl5/site_perl/5.10.0/OpenBSD.i386-openbsd-thread-multi-64int
    /usr/local/lib/perl5/site_perl/5.10.0
    /usr/local/lib/perl5/site_perl/5.8.7
    /usr/local/lib/perl5/site_perl/5.8.0
    /usr/local/lib/perl5/site_perl/5.6.0
    /usr/local/lib/perl5/site_perl/5.005
    /usr/local/lib/perl5/site_perl
    .


Environment for perl 5.10.0:
    HOME=/home/tchrist
    LANG (unset)
    LANGUAGE (unset)
    LD_LIBRARY_PATH=/usr/local/lib:/usr/lib:/usr/lib/X11:/usr/openwin/lib:/mnt/usr/local/lib:/mnt/usr/lib:/mnt/usr/lib/X11:/mnt/usr/openwin/lib
    LOGDIR (unset)
    PATH=/home/tchrist/scripts:/usr/local/etc:/usr/local/bin:/usr/local/sbin:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/games:/usr/local/apache/bin:/usr/games:/usr/X11R6/bin:.
    PERL_BADLANG (unset)
    SHELL=/bin/tcsh


@p5pRT p5pRT commented Mar 26, 2010

From mmaslano@redhat.com

Created by mmaslano@redhat.com

Summary​: data are lost if there's not enough space on device.
'perl -i' doesn't check return value (ENOSPC).

Original bug report​: https://bugzilla.redhat.com/show_bug.cgi?id=576929

Original report from Jim Meyering follows​:
Description of problem​:
perl -i is very handy. It allows me to filter a file and replace it with the
result. I can even use the -i.bak option to make a backup.
However, if while writing the new version (with or without .bak) perl
encounters a write error, it silently ignores it and destroys the original file
regardless of the fact that the "filtered" replacement is incomplete.
Even if I use .bak, and hence create a backup of the original, perl's failure
makes it appear that the filtering process has completed normally, and the
unsuspecting user may well remove the backup file.

Version-Release number of selected component (if applicable)​:
perl-5.10.0-87.fc12.x86_64

How reproducible​: always (with a nearly-full partition)

I've just done the following on a nearly-full partition​:

Start off with a two-byte file​:

  $ echo b > j

Use "perl -i" to filter that "in-place", increasing its size​:

  $ perl -ni -e '/b/ and print "."x100000 . "\nlast\n"' j \
  && echo 'success, supposedly'
  success, supposedly

Perl didn't report any problem and has exited successfully.
Sounds good. But.
Check out the size of the result​:

  $ wc -c j
  65536 j

That certainly looks wrong. 65536 (aka 64KiB) < 100000 + 6.
We should see that final "last" line.

  $ tail -c 10 j; echo
  ..........

Confirmed. Truncated file, and write failure not reported.

The above was done both with Fedora 12's v5.10.0
and with blead (built minutes ago, v5.12.0-RC0-6-gd419ab1).
Rerunning with "strace -e write perl ...", I see this:

  write(4, "................................"..., 4096) = 4096
  ...(15 others, identical)...
  write(4, "................................"..., 4096) = -1 ENOSPC (No space left on device)
  ...(7 others, identical)...
  write(4, "................................"..., 1702) = -1 ENOSPC (No space left on device)

Steps to Reproduce​: as above

Actual results​: as above

Expected results​: perl reports the failure and does not destroy the input

Perl Info

Flags:
    category=library
    severity=medium
    module=base

This perlbug was built using Perl 5.10.1 in the Fedora build system.
It is being executed now by Perl 5.10.1 - Sun Mar  7 06:02:09 UTC 2010.

Site configuration information for perl 5.10.1:

Configured by Red Hat, Inc. at Sun Mar  7 06:02:09 UTC 2010.

Summary of my perl5 (revision 5 version 10 subversion 1) configuration:

  Platform:
    osname=linux, osvers=2.6.18-164.6.1.el5, archname=x86_64-linux-thread-multi
    uname='linux x86-02.phx2.fedoraproject.org 2.6.18-164.6.1.el5 #1 smp tue oct 27 11:28:30 edt 2009 x86_64 x86_64 x86_64 gnulinux '
    config_args='-des -Doptimize=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -DDEBUGGING=-g -Accflags=-DPERL_USE_SAFE_PUTENV -Dversion=5.10.1 -Dmyhostname=localhost -Dperladmin=root@localhost -Dcc=gcc -Dcf_by=Red Hat, Inc. -Dprefix=/usr -Dvendorprefix=/usr -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl5 -Dsitearch=/usr/local/lib64/perl5 -Dprivlib=/usr/share/perl5 -Dvendorlib=/usr/share/perl5 -Darchlib=/usr/lib64/perl5 -Dvendorarch=/usr/lib64/perl5 -Dinc_version_list=5.10.0 -Darchname=x86_64-linux-thread-multi -Dlibpth=/usr/local/lib64 /lib64 /usr/lib64 -Duseshrplib -Dusethreads -Duseithreads -Duselargefiles -Dd_dosuid -Dd_semctl_semun -Di_db -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio -Dinstallusrbinperl=n -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less -isr -Dd_gethostent_r_proto -Ud_endhostent_r_proto -Ud_sethostent_r_proto -Ud_endprotoent_r_proto -Ud_setprotoent_r_proto -Ud_endservent_r_proto -Ud_setservent_r_proto -Dscriptdir=/usr/bin -Dotherlibdirs=/usr/local/lib64/perl5/site_perl/5.10.0/x86_64-linux-thread-multi:/usr/local/lib/perl5/site_perl/5.10.0:/usr/lib64/perl5/vendor_perl/5.10.0/x86_64-linux-thread-multi:/usr/lib/perl5/vendor_perl:/usr/lib/perl5/site_perl'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=define, use64bitall=define, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='gcc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DPERL_USE_SAFE_PUTENV -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic',
    cppflags='-D_REENTRANT -D_GNU_SOURCE -DPERL_USE_SAFE_PUTENV -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
    ccversion='', gccversion='4.4.3 20100211 (Red Hat 4.4.3-6)', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='gcc', ldflags =' -fstack-protector'
    libpth=/usr/local/lib64 /lib64 /usr/lib64
    libs=-lresolv -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc
    perllibs=-lresolv -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc
    libc=, so=so, useshrplib=true, libperl=libperl.so
    gnulibc_version='2.11.90'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E -Wl,-rpath,/usr/lib64/perl5/CORE'
    cccdlflags='-fPIC', lddlflags='-shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

Locally applied patches:



@INC for perl 5.10.1:
    /usr/local/lib64/perl5
    /usr/local/share/perl5
    /usr/local/share/perl5
    /usr/lib64/perl5
    /usr/share/perl5
    /usr/share/perl5
    /usr/lib64/perl5
    /usr/share/perl5
    /usr/local/lib64/perl5/site_perl/5.10.0/x86_64-linux-thread-multi
    /usr/local/lib/perl5/site_perl/5.10.0
    /usr/lib64/perl5/vendor_perl/5.10.0/x86_64-linux-thread-multi
    /usr/lib/perl5/vendor_perl/5.10.0
    /usr/lib/perl5/vendor_perl
    /usr/lib/perl5/site_perl
    .


Environment for perl 5.10.1:
    HOME=/home/marca
    LANG=en_US.UTF-8
    LANGUAGE=
    LD_LIBRARY_PATH (unset)
    LOGDIR (unset)
    PATH=/usr/lib64/ccache:/usr/lib64/ccache:/usr/lib64/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/sbin:/usr/sbin:/sbin:/home/marca/bin
    PERL_BADLANG (unset)
    SHELL=/bin/bash            



@p5pRT p5pRT commented Jul 5, 2010

From jim@meyering.net

Summary​: data are lost if there's not enough space on device.
'perl -i' doesn't check return value (ENOSPC).

Original bug report​:
https://bugzilla.redhat.com/show_bug.cgi?id=576929

I reported this in March, and it was forwarded to here​:

  http​://rt.perl.org/rt3/Public/Bug/Display.html?id=73830

I've just confirmed that 5.12.1 still suffers data loss when using -i on
a partition that is nearly full. The same applies with any other write
error, like EIO.


@p5pRT p5pRT commented Jul 5, 2010

The RT System itself - Status changed from 'new' to 'open'


@p5pRT p5pRT commented Feb 16, 2013

From @kentfredric

Out of curiosity, are there any cases you can think of where a
filehandle failing to close properly is a non-critical problem?

Because I can't say I'd be incredibly fond of my program terminating
with a fatal error simply because a file handle close had a failure when
the filehandle went out of scope.

Sure, for such cases you could alternatively prescribe the use of an
unchecked explicit close.

It certainly seems to make sense to me that at least filehandles closed
during global destruction should probably raise fatal exceptions, but I
haven't thought through the logistics of what that might mean.

I can certainly expect that for some people, they might not care if
STDOUT failed to close properly when their program terminated, and
they would resent being required to work around an unexpected failure.


@p5pRT p5pRT commented Feb 16, 2013

The RT System itself - Status changed from 'new' to 'open'


@p5pRT p5pRT commented Feb 16, 2013

From alex.hartmaier@gmail.com

On Sat, Feb 16, 2013 at 8:24 PM, Kent Fredric via RT
<perlbug-followup@​perl.org> wrote:

Out of curiosity, are there any cases you can think of where a
filehandle failing to close properly is a non-critical problem?

Because I can't say I'd be incredibly fond of my program terminating
with a fatal error simply because a file handle close had a failure when
the filehandle went out of scope.

Sure, for such cases you could alternatively prescribe the use of an
unchecked explicit close.

It certainly seems to make sense to me that at least filehandles closed
during global destruction should probably raise fatal exceptions, but I
haven't thought through the logistics of what that might mean.

I can certainly expect that for some people, they might not care if
STDOUT failed to close properly when their program terminated, and
they would resent being required to work around an unexpected failure.

---
via perlbug​: queue​: perl5 status​: new
https://rt-archive.perl.org/perl5/Ticket/Display.html?id=57512

As Lukas Mai already wrote regarding read-only filehandles: what
should go wrong when closing these?
Also I never understood why open/print/close doesn't throw an exception
instead of setting some weird variable, which isn't cleared if the
operation was successful.


@p5pRT p5pRT commented Feb 16, 2013

From tchrist@perl.com

As Lukas Mai already wrote regarding read-only filehandles: what
should go wrong when closing these?

If the close does not reflect the error status on the handle, or if that
error status is wrong, then that doesn't leave much besides the failure of
the pipe close. In theory, there is EINTR, although I'm not quite sure how
that would happen.

Also I never understood why open/print/close doesn't throw an exception
instead of setting some weird variable, which isn't cleared if the
operation was successful.

It's because Perl just mirrors errno. Yeah, it's annoying. That's why
I was hoping for something better out of the error status.

--tom


@p5pRT p5pRT commented Feb 16, 2013

From tchrist@perl.com

"Kent Fredric via RT" <perlbug-followup@​perl.org> wrote
  on Sat, 16 Feb 2013 11​:24​:56 PST​:

Out of curiosity, are there any cases you can think of where a
filehandle failing to close properly is a non-critical problem?

No, not really.

Because I can't say I'd be incredibly fond of my program terminating
with a fatal error simply because a file handle close had a failure when
the filehandle went out of scope.

Sure, for such cases you could alternatively prescribe the use of an
unchecked explicit close.

It certainly seems to make sense to me that at least filehandles closed
during global destruction should probably raise fatal exceptions, but I
haven't thought through the logistics of what that might mean.

I can certainly expect that for some people, they might not care if
STDOUT failed to close properly when their program terminated, and
they would resent being required to work around an unexpected failure.

Just as it is not possible to get reasonable behavior if you
enable fatal warnings during compile time, it is probably a bad
idea to make an implicit close fatal if it was not before. That's
because programs might be implicitly relying on everything getting
flushed even if one of the early ones fails.

But that should not stop us from emitting a warning, I think.

I always thought autodie should include this though.

--tom
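For explicit closes, autodie (core since 5.10.1) already upgrades a
failed close() to an exception, though it cannot reach the implicit
closes this ticket is about. A sketch, again assuming Linux's /dev/full
(the script name is invented):

```shell
cat > autodie_close.pl <<'EOF'
# With autodie, an explicit close() throws on failure instead of
# quietly returning false. Implicit closes at scope exit or global
# destruction remain unchecked, which is the gap this ticket describes.
use strict;
use warnings;
use autodie qw(open close);

open my $fh, '>', '/dev/full';   # the open itself succeeds
print {$fh} "stuff\n";           # buffered, so this "succeeds" too
close $fh;                       # flush fails with ENOSPC; autodie dies
EOF

perl autodie_close.pl
echo "exit status: $?"           # nonzero, unlike the unchecked version
```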


@p5pRT p5pRT commented Jul 4, 2013

From @cpansprout

On Fri Aug 01 11​:02​:42 2008, tom christiansen wrote​:

(4) During global destruction, Perl chases down all the open handles
and fflush()es and fclose()s them. Cf exit() vs _exit() in
libc. This failure to report errors is most egregious on
STDOUT, as every program that needs to operate correctly (and
which ones don't?) needs must install

    END { close(STDOUT) || die "can't close STDOUT: $!" }

    Since this is all-but-compulsory on any program you want to
    behave correctly, it needs to be in all programs; hence, it
    needs to be in the run-time.

Should it be just STDOUT, or should it also apply to ARGVOUT and the
selected filehandle?

--

Father Chrysostomos


@p5pRT p5pRT commented Jul 4, 2013

From olga.kryzhanovska@gmail.com

EINTR on close() can happen easily. The ksh93 developers learned that
the hard way (and fixed it for ksh93 version 'u'): busy scripts may
leak file descriptors if they fiddle a lot with fds.
SA_RESTART is only of little help, as for example Solaris had bugs in
the restart code on NFS file systems prior to Solaris 11. Other
Unix/Linux variants had similar issues. Granted, the workaround is to
require a recent OS patch set, but now that Oracle charges top money
for such patches the fixes may not be widespread.

Olga

On Sat, Feb 16, 2013 at 9​:55 PM, Tom Christiansen <tchrist@​perl.com> wrote​:

As Lukas Mai already wrote regarding read-only filehandles: what
should go wrong when closing these?

If the close does not reflect the error status on the handle, or if that
error status is wrong, then that doesn't leave much besides the failure of
the pipe close. In theory, there is EINTR, although I'm not quite sure how
that would happen.

Also I never understood why open/print/close doesn't throw an exception
instead of setting some weird variable, which isn't cleared if the
operation was successful.

It's because Perl just mirrors errno. Yeah, it's annoying. That's why
I was hoping for something better out of the error status.

--tom

--
  , _ _ ,
  { \/`o;====- Olga Kryzhanovska -====;o`\/ }
.----'-/`-/ olga.kryzhanovska@​gmail.com \-`\-'----.
`'-..-| / http​://twitter.com/fleyta \ |-..-'`
  /\/\ Solaris/BSD//C/C++ programmer /\/\
  `--` `--`


@p5pRT p5pRT commented Jul 5, 2013

From @cpansprout

On Fri Aug 01 11​:02​:42 2008, tom christiansen wrote​:

Furthermore,
I also believe all *IMPLICIT* closes that fail need to generate a
mandatory io warning

I have implemented that part on the sprout/destroio branch, and also
attached it as a patch. But it ends up flagging cases like this​:

my $pid = open my $fh, "| perl -e0";
waitpid $pid, 0;
close $fh or die $!;
__END__

Output​:
No child processes at - line 3.

I'm not so sure whether that should warn or not. And this is starting
to get out of my depth, since I am not very familiar with I/O.

I would appreciate it if someone else could take over here, or at the
very least give advice as to which situations should warn and which
should not.

Just running the test suite on that branch, flagging all the tests that
produce the "Error closing handle..." warning, and reducing them to
three-liners (as above), would be helpful in seeing what situations we
need to consider.

Come to think of it, that requires no C knowledge. So let's add this to
ticket #116469. (Obviously it only applies to the research aspect of
this ticket.)

--

Father Chrysostomos


@p5pRT p5pRT commented Jul 5, 2013

From @cpansprout

From 628daf8 Mon Sep 17 00​:00​:00 2001
From​: Father Chrysostomos <sprout@​cpan.org>
Date​: Wed, 3 Jul 2013 08​:48​:23 -0700
Subject​: [PATCH] [perl #57512] Warnings for implicitly closed handles
MIME-Version​: 1.0
Content-Type​: text/plain; charset=UTF-8
Content-Transfer-Encoding​: 8bit

If the implicit close() fails, warn about it, mentioning $! in the
message. This is a default warning in the io category.

When compiling with -Accflags=-DPERL_NOISY_IO_DESTROY the warning
"Handle implicitly closed" applies to any handle that is successfully
implicitly closed. Enabling this warning in general provides
obnoxiously verbose output, since this practice is so widespread. I
thought leaving it in might be useful as a tool to make the core Perl
code more robust against I/O errors. It probably should not be
documented, as such a tool for general use is better implemented in a
CPAN module that overrides IO::Handle::DESTROY.

This currently produces warnings for code like this​:

my $pid = open my $fh, "| perl -e0";
waitpid $pid, 0;
close $fh or die $!;
__END__

Output​:
No child processes at - line 3.

I'm not so sure that should warn. I am not sufficiently versed in
I/O to know when we do and do not want to warn, so I cannot proceed
much further with this.

Inline Patch
diff --git a/dist/IO/lib/IO/Handle.pm b/dist/IO/lib/IO/Handle.pm
index aebf74e..dbc7a91 100644
--- a/dist/IO/lib/IO/Handle.pm
+++ b/dist/IO/lib/IO/Handle.pm
@@ -339,12 +339,16 @@ sub new_from_fd {
 }
 
 #
-# There is no need for DESTROY to do anything, because when the
+# Under perl 5.19.2 and higher, the perl core itself defines a
+# DESTROY method.
+#
+# Under other perl versions, there is no
+# need for DESTROY to do anything, because when the
 # last reference to an IO object is gone, Perl automatically
 # closes its associated files (if any).  However, to avoid any
 # attempts to autoload DESTROY, we here define it to do nothing.
 #
-sub DESTROY {}
+if ($] < 5.019002) { eval 'sub DESTROY {}' }
 
 
 ################################################
diff --git a/pod/perldiag.pod b/pod/perldiag.pod
index fa9d7f2..93bfd9d 100644
--- a/pod/perldiag.pod
+++ b/pod/perldiag.pod
@@ -1825,6 +1825,12 @@ effective uids or gids failed.
 aliased to another hash, so it doesn't reflect anymore the state of the
 program's environment.  This is potentially insecure.
 
+=item Error closing handle: %s
+
+=item Error closing handle %s: %s
+
+(S io) An error occurred when Perl implicitly closed a filehandle.
+
 =item Error converting file specification %s
 
 (F) An error peculiar to VMS.  Because Perl may have to deal with file
diff --git a/t/run/switchd.t b/t/run/switchd.t
index 4334262..7b58d05 100644
--- a/t/run/switchd.t
+++ b/t/run/switchd.t
@@ -36,7 +36,7 @@ __SWDTEST__
 		 args => ['3'],
 		);
     like($r,
-qr/^sub<Devel::switchd::import>;import<Devel::switchd>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;$/,
+qr/^sub<Devel::switchd::import>;import<Devel::switchd>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<IO::Handle::DESTROY>;sub<IO::Handle::DESTROY>;$/,
     'Got debugging output: 1');
     $r = runperl(
 		 switches => [ '-Ilib', '-f', '-d:switchd=a,42' ],
@@ -44,7 +44,7 @@ qr/^sub<Devel::switchd::import>;import<Devel::switchd>;DB<main,$::tempfile_regex
 		 args => ['4'],
 		);
     like($r,
-qr/^sub<Devel::switchd::import>;import<Devel::switchd a 42>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;$/,
+qr/^sub<Devel::switchd::import>;import<Devel::switchd a 42>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<IO::Handle::DESTROY>;sub<IO::Handle::DESTROY>;$/,
     'Got debugging output: 2');
     $r = runperl(
 		 switches => [ '-Ilib', '-f', '-d:-switchd=a,42' ],
@@ -52,7 +52,7 @@ qr/^sub<Devel::switchd::import>;import<Devel::switchd a 42>;DB<main,$::tempfile_
 		 args => ['4'],
 		);
     like($r,
-qr/^sub<Devel::switchd::unimport>;unimport<Devel::switchd a 42>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;$/,
+qr/^sub<Devel::switchd::unimport>;unimport<Devel::switchd a 42>;DB<main,$::tempfile_regexp,9>;sub<Foo::foo>;DB<Foo,$::tempfile_regexp,5>;DB<Foo,$::tempfile_regexp,6>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<Bar::bar>;DB<Bar,$::tempfile_regexp,2>;sub<IO::Handle::DESTROY>;sub<IO::Handle::DESTROY>;$/,
     'Got debugging output: 3');
 }
 
diff --git a/universal.c b/universal.c
index a72c072..8155756 100644
--- a/universal.c
+++ b/universal.c
@@ -1313,6 +1313,52 @@ XS(XS_re_regexp_pattern)
     /* NOT-REACHED */
 }
 
+XS(XS_IO_Handle_DESTROY)
+{
+    dVAR;
+    dXSARGS;
+    SV *arg;
+    GV *gv = NULL;
+    IO *io = NULL;
+
+    if (!items) XSRETURN(0);
+    arg = ST(0);
+    SvGETMAGIC(arg);
+    if (SvROK(arg)) arg = SvRV(arg);
+    if (isGV(arg)) {
+        gv = (GV *)arg;
+        io = GvIO(gv);
+    }
+    else if (SvTYPE(arg) == SVt_PVIO)
+        io = (IO *)arg;
+
+    if (io && IoIFP(io) && IoIFP(io) != PerlIO_stdin() &&
+                           IoIFP(io) != PerlIO_stdout() &&
+                           IoIFP(io) != PerlIO_stderr() &&
+                           !(IoFLAGS(io) & IOf_FAKE_DIRP)) {
+        const bool success = io_close(io, FALSE);
+        if (success) {
+#ifdef PERL_NOISY_IO_DESTROY
+            if (gv)
+                Perl_ck_warner_d(aTHX_ packWARN(WARN_IO),
+                                    "Handle %"HEKf" implicitly closed",
+                                     GvNAME_HEK(gv));
+            else Perl_ck_warner_d(aTHX_ packWARN(WARN_IO),
+                                     "Handle implicitly closed");
+#endif
+        }
+        else {
+            if (gv)
+                Perl_ck_warner_d(aTHX_ packWARN(WARN_IO),
+                                      "Error closing handle %"HEKf": %"SVf,
+                                       GvNAME_HEK(gv), get_sv("!",GV_ADD));
+            else Perl_ck_warner_d(aTHX_ packWARN(WARN_IO),
+                                       "Error closing handle: %"SVf,
+                                        get_sv("!",GV_ADD));
+        }
+    }
+    XSRETURN(0);
+}
 struct xsub_details {
     const char *name;
     XSUBADDR_t xsub;
@@ -1369,6 +1415,7 @@ const struct xsub_details details[] = {
     {"re::regnames", XS_re_regnames, ";$"},
     {"re::regnames_count", XS_re_regnames_count, ""},
     {"re::regexp_pattern", XS_re_regexp_pattern, "$"},
+    {"IO::Handle::DESTROY", XS_IO_Handle_DESTROY, NULL},
 };
 
 void

@p5pRT p5pRT commented Jul 5, 2013

From @cpansprout

On Thu Jul 04 16:19:27 2013, olga.kryzhanovska@gmail.com wrote:

EINTR on close() can happen easily. The ksh93 developers learned that
the hard way (and fixed it for ksh93 version 'u') that busy scripts
may leak file descriptors, if they wiggle a lot with fds.
SA_RESTART is only of little help, as for example Solaris had bugs in
the restart code on NFS file systems, prior to Solaris 11. Other
Unix/Linux variants had similar issues. Granted, the work around is to
require a recent OS patch set, but now that Oracle charges top money
for such patches the fixes may not be wide spread.

Thank you for the explanation. I hope someone will find it useful.
Unfortunately, I know sufficiently little about I/O to have little idea
what you are talking about. :-)-: (both faces at once!) (See my other
message, where I ask for help finishing the work this ticket entails.)

--

Father Chrysostomos


@p5pRT p5pRT commented Jul 5, 2013

From @Leont

On Fri, Jul 5, 2013 at 1:18 AM, Olga Kryzhanovska
<olga.kryzhanovska@gmail.com> wrote:

EINTR on close() can happen easily. The ksh93 developers learned that
the hard way (and fixed it for ksh93 version 'u') that busy scripts
may leak file descriptors, if they wiggle a lot with fds.

Fortunately, EINTR is easily handled by retrying.

SA_RESTART is only of little help, as for example Solaris had bugs in
the restart code on NFS file systems, prior to Solaris 11. Other
Unix/Linux variants had similar issues. Granted, the work around is to
require a recent OS patch set, but now that Oracle charges top money
for such patches the fixes may not be wide spread.

SA_RESTART doesn't work well with our delayed signals. I wouldn't
recommend that approach in Perl in the first place. Automatic restart
is one of those things that makes easy things easier but hard things
more difficult.

Leon


@p5pRT p5pRT commented Sep 14, 2014

From @cpansprout

On Fri Aug 01 11:02:42 2008, tom christiansen wrote:

Furthermore,
I also believe all *IMPLICIT* closes that fail need to generate a
mandatory io warning,

I am taking a stab at this once more, but I have run into a snag. I have created a very small disk image and I am trying to write a file to it. But notice how close() does not set $!:

$ ./perl -Ilib -e 'open fh, ">/Volumes/Disk Image/foo"; print fh "x"x1000, "\n" for 1..50; close fh or die "error closing: $!"'
error closing: No space left on device at -e line 1.

$ ./perl -Ilib -e 'open fh, ">/Volumes/Disk Image/foo"; print fh "x"x1000, "\n" for 1..50; unlink "oenthueonth"; close fh or die "error closing: $!"'
error closing: No such file or directory at -e line 1.

That last error has nothing to do with the false return value from close(). It is from an unrelated system call.

So if I change perl to emit a default warning when an implicit close fails, then it cannot provide any real diagnostics as to why closing the handle failed. My attempts at adding it give this:

$ ./perl -Ilib -e 'open fh, ">/Volumes/Disk Image/foo"; print fh "x"x1000, "\n" for 1..50; undef *fh'
Error closing handle: No space left on device at -e line 1.

That's fine, but without the explicit undef:

$ ./perl -Ilib -e 'open fh, ">/Volumes/Disk Image/foo"; print fh "x"x1000, "\n" for 1..50'
Error closing handle: Invalid argument during global destruction.

because at some point errno changed from 28 (ENOSPC) to 22 (EINVAL) when I wasn't looking.

Omitting errno is very unhelpful:

Error closing handle during global destruction.

Would it be possible to modify handles to store the errno value instead of simply an error flag? Then close() could set it, and what people have been recommending for years (close or die $!) will start to do what people expect more reliably than before.

--

Father Chrysostomos


@p5pRT p5pRT commented Sep 15, 2014

From tchrist@perl.com

"Father Chrysostomos via RT" <perlbug-followup@perl.org> wrote
  on Sun, 14 Sep 2014 14:47:03 PDT:

Would it be possible to modify handles to store the errno value
instead of simply an error flag? Then close() could set it, and what
people have been recommending for years (close or die $!) will start
to do what people expect more reliably than before.

I hope so. Whether you justify it with DWIM or by calling not doing it a
violation of the principle of least surprise, it seems worthy of serious
consideration.

--tom


@p5pRT p5pRT commented Sep 18, 2014

From @cpansprout

On Mon Sep 15 12:35:42 2014, tom christiansen wrote:

"Father Chrysostomos via RT" <perlbug-followup@perl.org> wrote
on Sun, 14 Sep 2014 14:47:03 PDT:

Would it be possible to modify handles to store the errno value
instead of simply an error flag? Then close() could set it, and what
people have been recommending for years (close or die $!) will start
to do what people expect more reliably than before.

I hope so. Whether you justify it with DWIM or by calling not doing it a
violation of the principle of least surprise, it seems worthy of serious
consideration.

I have implemented this on the sprout/destroio branch. I have tested it manually on Mac OS X, but the VMS and Windows code is completely untested. I see no way to automate a test for this.

I would appreciate it if people with VMS and Windows could try this out, using a small disk image (or floppy disk if your computer still takes them), and see whether things work as expected.

--

Father Chrysostomos


@p5pRT p5pRT commented Sep 21, 2014

From @cpansprout

On Thu Sep 18 00:08:39 2014, sprout wrote:

On Mon Sep 15 12:35:42 2014, tom christiansen wrote:

"Father Chrysostomos via RT" <perlbug-followup@perl.org> wrote
on Sun, 14 Sep 2014 14:47:03 PDT:

Would it be possible to modify handles to store the errno value
instead of simply an error flag? Then close() could set it, and what
people have been recommending for years (close or die $!) will start
to do what people expect more reliably than before.

I hope so. Whether you justify it with DWIM or by calling not doing it a
violation of the principle of least surprise, it seems worthy of serious
consideration.

I have implemented this on the sprout/destroio branch. I have tested
it manually on Mac OS X, but the VMS and Windows code is completely
untested. I see no way to automate a test for this.

I would appreciate it if people with VMS and Windows could try this
out, using a small disk image (or floppy disk if your computer still
takes them), and see whether things work as expected.

The Windows smokes from the smoke-me/destroio branch show that it won't even compile there. I've not been able to figure out why. Any help would be appreciated.

--

Father Chrysostomos


@p5pRT p5pRT commented Nov 3, 2014

From @cpansprout

As of dc0c4db, close() restores the value of $! relevant to the handle and implicitly closed handles emit a warning.

The one remaining issue in this ticket is that STDOUT is not closed automatically on program exit. (I'm not sure I want to tackle it. This stuff is *hard*.)

--

Father Chrysostomos


@p5pRT p5pRT commented Mar 7, 2015

From @jkeenan

On Sun Nov 02 20:14:13 2014, sprout wrote:

As of dc0c4db, close() restores the value of $! relevant to the handle
and implicitly closed handles emit a warning.

The one remaining issue in this ticket is that STDOUT is not closed
automatically on program exit. (I'm not sure I want to tackle it.
This stuff is *hard*.)

Father C: This ticket is marked as a blocker for 5.22.0. Given the age of the ticket, it's not a regression from 5.20. So should we still mark it as a blocker for 5.22?

Thank you very much.

--
James E Keenan (jkeenan@cpan.org)


@p5pRT p5pRT commented Mar 18, 2015

From @steve-m-hay

Just noting here (since I don't see the commit ID mentioned anywhere) that http://perl5.git.perl.org/perl.git/commitdiff/96d7c88819733eaaba892177a967d9e898b2b924 committed the major part of the work.

However, the comment at https://rt-archive.perl.org/perl5/Ticket/Display.html?id=57512#txn-1316805 notes that one further piece of work is remaining, so the ticket should not be closed yet.

Whether it's really a blocker for 5.22, however, is debatable, as James said, although tchrist made it clear in his original report that the problem in question -- not closing STDOUT on program exit -- is one of the most important cases to cover.


@p5pRT p5pRT commented Mar 22, 2015

From @rjbs

I've updated this ticket to block 5.24.0 instead.

*plenty* of time!

--
rjbs


@p5pRT p5pRT commented Mar 25, 2015

From @cpansprout

On Sun Mar 22 15:55:27 2015, rjbs wrote:

I've updated this ticket to block 5.24.0 instead.

*plenty* of time!

Not necessarily. I only worked on this bug because you said you wanted it fixed by 5.?? (I don't remember which) because Tom Christiansen said convincingly that it was important.

I am not comfortable doing anything really regarding I/O, and there was such a paucity of feedback that I still do not know whether my implementation was correct. (You could say that people said they wanted this fixed but did not actually want it fixed, since they were not willing to put forth any effort, even as little as feedback and advice.)

So, until some other victim steps forward, this ticket is effectively stalled.

--

Father Chrysostomos


@p5pRT p5pRT commented Mar 26, 2015

From @rjbs

* Father Chrysostomos via RT <perlbug-followup@perl.org> [2015-03-24T23:37:26]

On Sun Mar 22 15​:55​:27 2015, rjbs wrote​:

I've updated this ticket to block 5.24.0 instead.

*plenty* of time!

Not necessarily.

You missed my \N{INVISIBLE WINKING FACE}.

--
rjbs


@p5pRT p5pRT commented Nov 5, 2015

From @tonycoz

On Tue Mar 24 20:37:25 2015, sprout wrote:

So, until some other victim steps forward, this ticket is effectively
stalled.

How about the attached?

Testing the stdout flush issue would require looking for /dev/full, or something similar.

No idea for a generic test for the in-place edit output failures, I used an almost full test disk.

Tony


@p5pRT p5pRT commented Nov 5, 2015

From @tonycoz

0001-perl-57512-report-an-error-and-fail-if-we-can-t-flus.patch
From 7239299e6f7a645547d83946e6a9ff27701cefbb Mon Sep 17 00:00:00 2001
From: Tony Cook <tony@develop-help.com>
Date: Wed, 4 Nov 2015 16:52:37 +1100
Subject: [perl #57512] report an error and fail if we can't flush stdout

---
 perl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/perl.c b/perl.c
index b64975b..9063645 100644
--- a/perl.c
+++ b/perl.c
@@ -584,6 +584,19 @@ perl_destruct(pTHXx)
     assert(PL_scopestack_ix == 0);
 
     /* Need to flush since END blocks can produce output */
+    /* flush stdout separately, since we can identify it */
+#ifdef USE_PERLIO
+    {
+        PerlIO *stdo = PerlIO_stdout();
+        if (*stdo && PerlIO_flush(stdo)) {
+            PerlIO_restore_errno(stdo);
+            PerlIO_printf(PerlIO_stderr(), "Unable to flush stdout: %s",
+                          Strerror(errno));
+            if (!STATUS_UNIX)
+                STATUS_ALL_FAILURE;
+        }
+    }
+#endif
     my_fflush_all();
 
 #ifdef PERL_TRACE_OPS
-- 
2.1.4


@p5pRT p5pRT commented Nov 5, 2015

From @tonycoz

0002-perl-57512-croak-on-failure-to-close-an-in-place-edi.patch
From 5435e20ac24d786e8d109dde8a073112ff8211ce Mon Sep 17 00:00:00 2001
From: Tony Cook <tony@develop-help.com>
Date: Thu, 5 Nov 2015 15:06:00 +1100
Subject: [perl #57512] croak on failure to close an in-place edit output file

---
 doio.c           | 23 ++++++++++++++++++++++-
 pod/perldiag.pod |  5 +++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/doio.c b/doio.c
index d95ad9c..6704862 100644
--- a/doio.c
+++ b/doio.c
@@ -808,9 +808,13 @@ PerlIO *
 Perl_nextargv(pTHX_ GV *gv, bool nomagicopen)
 {
     IO * const io = GvIOp(gv);
+    SV *const old_out_name = PL_inplace ? newSVsv(GvSV(gv)) : NULL;
 
     PERL_ARGS_ASSERT_NEXTARGV;
 
+    if (old_out_name)
+        SAVEFREESV(old_out_name);
+
     if (!PL_argvoutgv)
 	PL_argvoutgv = gv_fetchpvs("ARGVOUT", GV_ADD|GV_NOTQUAL, SVt_PVIO);
     if (io && (IoFLAGS(io) & (IOf_ARGV|IOf_START)) == (IOf_ARGV|IOf_START)) {
@@ -852,6 +856,13 @@ Perl_nextargv(pTHX_ GV *gv, bool nomagicopen)
             }
         }
         else {
+            {
+                IO * const io = GvIOp(PL_argvoutgv);
+                if (io && IoIFP(io) && old_out_name && !io_close(io, PL_argvoutgv, FALSE, FALSE)) {
+                    Perl_croak(aTHX_ "Failed to close in-place edit file %"SVf": %s\n",
+                               old_out_name, Strerror(errno));
+                }
+            }
             /* This very long block ends with return IoIFP(GvIOp(gv));
                Both this block and the block above fall through on open
                failure to the warning code, and then the while loop above tries
@@ -1015,7 +1026,17 @@ Perl_nextargv(pTHX_ GV *gv, bool nomagicopen)
     if (io && (IoFLAGS(io) & IOf_ARGV))
 	IoFLAGS(io) |= IOf_START;
     if (PL_inplace) {
-	(void)do_close(PL_argvoutgv,FALSE);
+        if (old_out_name) {
+            IO * const io = GvIOp(PL_argvoutgv);
+            if (io && IoIFP(io) && !io_close(io, PL_argvoutgv, FALSE, FALSE)) {
+                Perl_croak(aTHX_ "Failed to close in-place edit file %"SVf": %s\n",
+                           old_out_name, Strerror(errno));
+            }
+        }
+        else {
+            /* maybe this is no longer wanted */
+            (void)do_close(PL_argvoutgv,FALSE);
+        }
 	if (io && (IoFLAGS(io) & IOf_ARGV)
 	    && PL_argvout_stack && AvFILLp(PL_argvout_stack) >= 0)
 	{
diff --git a/pod/perldiag.pod b/pod/perldiag.pod
index 5111410..d7c6f8c 100644
--- a/pod/perldiag.pod
+++ b/pod/perldiag.pod
@@ -2167,6 +2167,11 @@ Check the #! line, or manually feed your script into Perl yourself.
 CHECK, INIT, or END subroutine.  Processing of the remainder of the
 queue of such routines has been prematurely ended.
 
+=item Failed to close in-place edit file %s: %s
+
+(F) Closing an output file from in-place editing, as with the C<-i>
+command-line switch, failed.
+
 =item False [] range "%s" in regex; marked by S<<-- HERE> in m/%s/
 
 (W regexp)(F) A character class range must start and end at a literal
-- 
2.1.4


@p5pRT p5pRT commented Mar 17, 2016

From @rjbs

I have applied TonyC's patch tentatively.

I have also moved this ticket to the v5.25.1 blockers for future consideration of final changes (like making it possible to fatalize just the implicit close failure warnings).

--
rjbs


@p5pRT p5pRT commented May 17, 2016

From @khwilliamson

Well, it's almost time for 5.25.1, which this ticket blocks. The final message says it is so that final considerations can be done now. What might these be?

--
Karl Williamson


@p5pRT p5pRT commented May 17, 2016

From @rjbs

On Tue May 17 14:45:53 2016, khw wrote:

Well, it's almost time for 5.25.1, which this ticket blocks. The
final message says it is so that final considerations can be done now.
What might these be?

The only one that comes to mind *for me* is the one I referenced above. To elaborate, note these warnings from perldiag:

  Warning: unable to close filehandle properly: %s
  Warning: unable to close filehandle %s properly: %s

These warnings are in category "io". They cannot be fatalized without fatalizing quite a few other warnings. If I could "use warnings FATAL => 'io::implicitclose'" (or something), I'd use that instead of "close $f or die".

Clearly this does not block 5.25.1 from being released, but it would be nice to have.

--
rjbs


@p5pRT p5pRT commented May 17, 2016

From @cpansprout

On Tue May 17 15:06:24 2016, rjbs wrote:

On Tue May 17 14:45:53 2016, khw wrote:

Well, it's almost time for 5.25.1, which this ticket blocks. The
final message says it is so that final considerations can be done
now.
What might these be?

The only one that comes to mind *for me* is the one I referenced
above. To elaborate, note these warnings from perldiag:

Warning: unable to close filehandle properly: %s
Warning: unable to close filehandle %s properly: %s

These warnings are in category "io". They cannot be fatalized without
fatalizing quite a few other warnings. If I could "use warnings FATAL
=> 'io::implicitclose'" (or something), I'd use that instead of "close
$f or die".

I think I mentioned this before. You cannot really fatalize anything that happens automatically when exiting a scope. Any time that we die on scope exit, we end up with weird bugs, such as #105916. See also the double eval in ext/PerlIO-encoding/t/encoding.t and bug #115692.

This type of problem comes up every so often, and there is no clear solution for it.

Conceptually it doesn't even make sense to me. If a handle closes because of a die() elsewhere, and the handle closure dies, too, which error wins?

--

Father Chrysostomos


@p5pRT p5pRT commented May 18, 2016

From @iabyn

On Tue, May 17, 2016 at 04:00:29PM -0700, Father Chrysostomos via RT wrote:

I think I mentioned this before. You cannot really fatalize anything that happens automatically when exiting a scope. Any time that we die on scope exit, we end up with weird bugs, such as #105916.

Note that your sample test case in that ticket passes under 5.24.0,
due (I would guess) to my context work and the re-ordering of
when leave_scope() is called during context unwind.

--
Dave's first rule of Opera​:
If something needs saying, say it​: don't warble it.


@p5pRT p5pRT commented May 18, 2016

From @cpansprout

On Wed May 18 02:09:55 2016, davem wrote:

On Tue, May 17, 2016 at 04:00:29PM -0700, Father Chrysostomos via RT
wrote:

I think I mentioned this before. You cannot really fatalize anything
that happens automatically when exiting a scope. Any time that we
die on scope exit, we end up with weird bugs, such as #105916.

Note that your sample test case in that ticket passes under 5.24.0,
due (I would guess) to my context work and the re-ordering of
when leave_scope() is called during context unwind.

Oh wow! Does that mean these sorts of things are fixable? I still don't know which die() should win, though.

--

Father Chrysostomos


@p5pRT p5pRT commented May 23, 2016

From @iabyn

On Wed, May 18, 2016 at 06:25:42AM -0700, Father Chrysostomos via RT wrote:

On Wed May 18 02:09:55 2016, davem wrote:

On Tue, May 17, 2016 at 04:00:29PM -0700, Father Chrysostomos via RT
wrote:

I think I mentioned this before. You cannot really fatalize anything
that happens automatically when exiting a scope. Any time that we
die on scope exit, we end up with weird bugs, such as #105916.

Note that your sample test case in that ticket passes under 5.24.0,
due (I would guess) to my context work and the re-ordering of
when leave_scope() is called during context unwind.

Oh wow! Does that mean these sorts of things are fixable?

One of my explicit goals for the context work was to make these sorts of
things either fixed or at least fixable.

All scope exits, whether normally via pp_leavefoo() or abnormally via
Perl_dounwind(), now consistently do (in this order):

  cx = &cxstack[cxstack_ix];
  LEAVE_SCOPE(cx->blk_oldsaveix);
  cx_popfoo(cx); /* was POPFOO */
  cx_popblock(cx); /* was POPBLOCK */
  cxstack_ix--;

Note that that whole chunk of code is idempotent, so that if an exception
is raised, either within leave_scope() or within cx_popfoo(), then
dounwind() will process that same context stack entry again, but nothing
Bad will happen.

In particular, all the cx_popfoo()s have been explicitly written to be
idempotent: for example, cx_popsub() does:

  cv = cx->blk_sub.cv;
  cx->blk_sub.cv = NULL;
  SvREFCNT_dec(cv);

Currently the only difference between pp_leavefoo() and Perl_dounwind
is that the latter only does the cx_popblock(cx) (which restores
a bunch of PL_ variables) for the last popped context, not for each one.

I still don't know which die() should win, though.

It should behave the same way as a die in a STORE called during
scope exit that's undoing a 'local $tied = 1' for example.

Which isn't actually answering your question ;-)

--
In England there is a special word which means the last sunshine
of the summer. That word is "spring".


@p5pRT p5pRT commented Mar 20, 2017

From @iabyn

On Mon, May 23, 2016 at 03:37:43PM +0100, Dave Mitchell wrote:
[stuff related to the final issues in a nine-year old ticket about
handling implicit closes with errors]

Hi Karl!

On 27 Jan you added this ticket to the 5.26.0 blockers. Was that
intentional? I can't see any reason why it should block.

--
Spock (or Data) is fired from his high-ranking position for not being able
to understand the most basic nuances of about one in three sentences that
anyone says to him.
  -- Things That Never Happen in "Star Trek" #19


@p5pRT p5pRT commented Mar 20, 2017

From @khwilliamson

On 03/20/2017 09:31 AM, Dave Mitchell wrote:

On Mon, May 23, 2016 at 03:37:43PM +0100, Dave Mitchell wrote:
[stuff related to the final issues in a nine-year old ticket about
handling implicit closes with errors]

Hi Karl!

On 27 Jan you added this ticket to the 5.26.0 blockers. Was that
intentional? I can't see any reason why it should block.

See https://rt-archive.perl.org/perl5/Ticket/Display.html?id=127731#txn-1443374

It was a 5.25 blocker; I moved it to instead block 5.26.0 so that it
could be decided now.


@p5pRT p5pRT commented Mar 20, 2017

From @iabyn

On Mon, Mar 20, 2017 at 10:28:42AM -0600, Karl Williamson wrote:

On 03/20/2017 09:31 AM, Dave Mitchell wrote:

On Mon, May 23, 2016 at 03:37:43PM +0100, Dave Mitchell wrote:
[stuff related to the final issues in a nine-year old ticket about
handling implicit closes with errors]

Hi Karl!

On 27 Jan you added this ticket to the 5.26.0 blockers. Was that
intentional? I can't see any reason why it should block.

See https://rt-archive.perl.org/perl5/Ticket/Display.html?id=127731#txn-1443374

It was a 5.25 blocker; I moved it to instead block 5.26.0 so that it could
be decided now.

Ah thanks.

In that case I propose that this ticket not be a blocker for 5.26.0, nor
a blocker for any future release.

--
"Strange women lying in ponds distributing swords is no basis for a system
of government. Supreme executive power derives from a mandate from the
masses, not from some farcical aquatic ceremony."
  -- Dennis, "Monty Python and the Holy Grail"


@p5pRT p5pRT commented Mar 21, 2017

From @xsawyerx

On 03/20/2017 06:01 PM, Dave Mitchell wrote:

On Mon, Mar 20, 2017 at 10:28:42AM -0600, Karl Williamson wrote:

On 03/20/2017 09:31 AM, Dave Mitchell wrote:

On Mon, May 23, 2016 at 03:37:43PM +0100, Dave Mitchell wrote:
[stuff related to the final issues in a nine-year old ticket about
handling implicit closes with errors]

Hi Karl!

On 27 Jan you added this ticket to the 5.26.0 blockers. Was that
intentional? I can't see any reason why it should block.

See https://rt-archive.perl.org/perl5/Ticket/Display.html?id=127731#txn-1443374

It was a 5.25 blocker; I moved it to instead block 5.26.0 so that it could
be decided now.

Ah thanks.

In that case I propose that this ticket not be a blocker for 5.26.0, nor
a blocker for any future release.

Agreed.
