
-pi doesn't sanely handle write errors #328

Closed
p5pRT opened this issue Aug 3, 1999 · 44 comments

Comments

@p5pRT

commented Aug 3, 1999

Migrated from rt.perl.org#1154 (status was 'resolved')

Searchable as RT1154$

@p5pRT


commented Aug 3, 1999

From morty@sled.gsfc.nasa.gov

perl 5.00502 and 5.00503 (probably earlier as well) don't check for write
errors when they do -pi. For example​:

morty@​frakir​:/spare/morty 12​:16​:11 1513$ df -k .
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda7 513147 513145 2 100% /spare
morty@​frakir​:/spare/morty 12​:16​:14 1514$ ls -l bar*
-rw-r--r-- 1 morty wheel 6923 Aug 2 19​:03 bar
morty@​frakir​:/spare/morty 12​:16​:24 1515$ perl -pi~ -e '' bar
morty@​frakir​:/spare/morty 12​:16​:28 1516$ echo $?
0
morty@​frakir​:/spare/morty 12​:16​:31 1517$ ls -l bar*
-rw-r--r-- 1 morty wheel 2048 Aug 3 12​:16 bar
-rw-r--r-- 1 morty wheel 6923 Aug 2 19​:03 bar~

IMHO, better behavior would be for perl to put the file back the way it was
if it encounters a write error.

I haven't tried the latest development version; my apologies if this is a
known bug and/or has already been fixed and/or is documented somewhere I
didn't think to look.

Perl Info


Site configuration information for perl 5.00503:

Configured by morty at Mon Aug  2 20:33:38 EDT 1999.

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
    osname=linux, osvers=2.0.36, archname=i686-linux
    uname='linux frakir 2.0.36 #1 tue dec 29 13:11:13 est 1998 i686 unknown '
    hint=recommended, useposix=true, d_sigaction=define
    usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
    cc='cc', optimize='-O2', gccversion=2.7.2.3
    cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
    ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
    stdchar='char', d_stdstdio=define, usevfork=false
    intsize=4, longsize=4, ptrsize=4, doublesize=8
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
    alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
    ld='cc', ldflags =' -L/usr/local/lib'
    libpth=/usr/local/lib /lib /usr/lib
    libs=-lnsl -lndbm -lgdbm -ldb -ldl -lm -lc -lposix -lcrypt
    libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
    cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'

Locally applied patches:
    


@INC for perl 5.00503:
    /home/frakir/morty/lib/perl5
    /usr/local/lib/perl5/5.00503/i686-linux
    /usr/local/lib/perl5/5.00503
    /usr/local/lib/perl5/site_perl/5.005/i686-linux
    /usr/local/lib/perl5/site_perl/5.005
    .


Environment for perl 5.00503:
    HOME=/home/frakir/morty
    LANG (unset)
    LANGUAGE (unset)
    LD_LIBRARY_PATH (unset)
    LOGDIR (unset)
    PATH=/usr/local/bin:/usr/local/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/sbin:/usr/local/sbin:/sbin:/opt/sybase/bin:/home/frakir/morty/bin:/usr/local/pgsql/bin:/usr/local/ssl/bin
    PERL5LIB=/home/frakir/morty/lib/perl5
    PERL_BADLANG (unset)
    SHELL=/bin/bash

@p5pRT


commented Nov 12, 2004

From @smpeters

[morty@​sled.gsfc.nasa.gov - Tue Aug 03 05​:24​:27 1999]​:


perl 5.00502 and 5.00503 (probably earlier as well) don't check for
write errors when they do -pi. For example​:

morty@​frakir​:/spare/morty 12​:16​:11 1513$ df -k .
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda7 513147 513145 2 100% /spare
morty@​frakir​:/spare/morty 12​:16​:14 1514$ ls -l bar*
-rw-r--r-- 1 morty wheel 6923 Aug 2 19​:03 bar
morty@​frakir​:/spare/morty 12​:16​:24 1515$ perl -pi~ -e '' bar
morty@​frakir​:/spare/morty 12​:16​:28 1516$ echo $?
0
morty@​frakir​:/spare/morty 12​:16​:31 1517$ ls -l bar*
-rw-r--r-- 1 morty wheel 2048 Aug 3 12​:16 bar
-rw-r--r-- 1 morty wheel 6923 Aug 2 19​:03 bar~

IMHO, better behavior would be for perl to put the file back the way it
was if it encounters a write error.

I haven't tried the latest development version; my apologies if this is
a known bug and/or has already been fixed and/or is documented somewhere
I didn't think to look.

steve@​kirk pi_test $ perl -pi- -e'exit' test
Can't rename test to test-​: Permission denied, skipping file.

It seems to have been fixed somewhere prior to Perl 5.8.5.

Steve Peters

@p5pRT


commented Nov 12, 2004

@smpeters - Status changed from 'open' to 'resolved'

@p5pRT


commented Nov 12, 2004

From morty@sled.gsfc.nasa.gov

On Fri, Nov 12, 2004 at 03​:46​:36AM -0000, Steve Peters via RT wrote​:

steve@​kirk pi_test $ perl -pi- -e'exit' test
Can't rename test to test-​: Permission denied, skipping file.
It seems to have been fixed somewhere prior to Perl 5.8.5.

That didn't duplicate the steps that led to the original problem. The
problem is not when the initial rename fails or file create fails,
it's when a subsequent write fails.

I just tried it under 5.8.5, and it still happens. Example​:

root@​red-sonja​:/mnt 05​:30​:13 1061# dd if=/dev/zero of=bar bs=1024
count=700
700+0 records in
700+0 records out
root@​red-sonja​:/mnt 05​:30​:46 1062# df -k .
Filesystem 1k-blocks Used Available Use% Mounted on
/root/foo 1003 717 235 75% /mnt
root@​red-sonja​:/mnt 05​:30​:50 1063# ls -l
total 716
-rw-r--r-- 1 root root 716800 Nov 12 05​:30 bar
drwxr-xr-x 2 root root 12288 Nov 12 05​:29 lost+found
root@​red-sonja​:/mnt 05​:30​:55 1064# perl -pi~ -e 1 bar
root@​red-sonja​:/mnt 05​:31​:12 1065# echo $?
0
root@​red-sonja​:/mnt 05​:31​:16 1066# ls -l
total 999
-rw-r--r-- 1 root root 286720 Nov 12 05​:31 bar
-rw-r--r-- 1 root root 716800 Nov 12 05​:30 bar~
drwxr-xr-x 2 root root 12288 Nov 12 05​:29 lost+found

- Morty

@p5pRT


commented Nov 12, 2004

From morty@frakir.org

Please reopen. Thanks!

A complete demo script is attached.

@p5pRT


commented Nov 12, 2004

From morty@sled.gsfc.nasa.gov

It occurs to me that I never included a demo of how to create an
artificial space problem for testing purposes. Here is such a
script; requires root on a modern linux system.

#!/bin/sh

# script to demo problem of perl not detecting write errors with -pi
# requires root on a linux system

IMG=/root/floppy.img
IMG_DIR=/root
IMG_SIZE=1440
BS=1024
SOURCE=/dev/zero
MOUNTPOINT=/mnt
MKFS_ARGS="-t ext2"
MOUNT_ARGS="-o loop"
FILE=bar
FILE_SIZE=900

cd / &&
mkdir -p $IMG_DIR &&
dd if=$SOURCE of=$IMG bs=$BS count=$IMG_SIZE &&
mkfs $MKFS_ARGS $IMG </dev/null &&
mount $MOUNT_ARGS $IMG $MOUNTPOINT &&
cd $MOUNTPOINT &&
dd if=$SOURCE of=$FILE bs=$BS count=$FILE_SIZE &&
echo before running perl​: && df -k . && ls -l bar* &&
echo Running perl. . . &&
perl -pi~ -e 1 $FILE

echo Done. Status is​: $? && df -k . && ls -l bar*

# clean up regardless of status
cd /; umount $MOUNTPOINT; rm $IMG

@p5pRT


commented Nov 15, 2004

@smpeters - Status changed from 'resolved' to 'open'

@p5pRT


commented Nov 17, 2008

From @chipdude

A first step in handling write errors would be dying if either print or
close returns false. This would be a significant change from the old
ways, but I think it's OK ... data loss is one of the primary evils,
after all.

As for restoring the partially read file back to its original name on
error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

Finally, adding death to the behavior of existing code that has its own
read/print loop is far more questionable, and I'd rather not do it. In
other words, if you write this you should get death on disk full​:

  perl -pi~ 's/foo/bar/' *.c

But if you write this you should _not_ get automatic death​:

  #!/usr/bin/perl
  $^I = '~';
  while (<>) { ....; print }

Comments?

@p5pRT


commented Nov 17, 2008

From @chipdude

{{ I filed this comment in RT and asked it to copy p5p, but over an hour
  later I don't see it. I wonder what's up with that? Meanwhile... }}

A first step in handling write errors in -pi would be dying if either print
or close returns false. This would be a significant change from the old
ways, but I think it's OK ... data loss is one of the primary evils, after
all.

As for restoring the partially read file back to its original name on
error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

Finally, adding death to the behavior of existing code that has its own
read/print loop is far more questionable, and I'd rather not do it. In
other words, if you write this you should get death on disk full​:

  perl -pi~ 's/foo/bar/' *.c

But if you write this you should _not_ get automatic death​:

  #!/usr/bin/perl
  $^I = '~';
  while (<>) { ....; print }

Comments?
--
Chip Salzenberg <chip@​pobox.com>

@p5pRT


commented Nov 18, 2008

From Mordechai.T.Abzug@nasa.gov

On Mon, Nov 17, 2008 at 10​:38​:05AM -0800, Chip Salzenberg via RT wrote​:

A first step in handling write errors would be dying if either print
or close returns false. This would be a significant change from the
old ways, but I think it's OK ... data loss is one of the primary
evils, after all.

Agreed. At the very least, I should get a non-zero exit status.

As for restoring the partially read file back to its original name
on error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

In my use case, I was looking to use Perl to in-place edit system
configs. The processing is minimal. I would much rather that failure
automatically revert the config rather than leave a corrupt config
(neither the old nor the new) in place.

And what about the case where one has an error when using -i without a
backup, i.e. perl -pi -e $something $file? Perl knows that something
has gone horribly wrong, but it doesn't have a name to give a second
file. In that case, I'd say that the correct behavior is to revert
the original file name to the original contents.

Finally, adding death to the behavior of existing code that has its own
read/print loop is far more questionable, and I'd rather not do it. In
other words, if you write this you should get death on disk full​:

  perl -pi~ 's/foo/bar/' *.c

But if you write this you should _not_ get automatic death​:

  #!/usr/bin/perl
  $^I = '~';
  while (<>) { ....; print }

Comments?

Agreed. The main problem here is that the -p implicit loop doesn't
have error handling. The programmer is relying on Perl's -p to be
properly implemented, so -p should do error handling and related
goodness. If the programmer chooses to do an explicit loop or uses -n
with explicit print, the onus is on the programmer to do error
handling on any print or close calls. [Is there error handling on the
implicit close that is part of <>?]

But what if the programmer's code does throw die while processing <>
with -i, with $^I enabled, or with -n -i? In that case, I would still
expect an atexit handler / END block to clean up the mess in the same
way as for -p -i. [This might be an argument in favor of not trying
to clean up the mess, as you suggested above. It's easy to tell the
programmer "use -i$ext $file", and if there is an error, your original
file will be in ${file}$ext". It's a lot harder to rely on Perl to
correctly determine when an error has occurred while processing $file,
and then make policy decisions on what constitutes safe cleanup.
Also, one can't really rely on Perl being able to safely recover --
what if Perl is running on a network mount, and the actual problem
here is that the network mount failed? What if the system reboots in
the middle of the script run?]

- Morty
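Morty's bracketed question about the implicit close can be probed without filling a real disk. The sketch below assumes a Linux system, where the /dev/full device fails every write with ENOSPC; it illustrates the buffering problem under discussion rather than any proposed fix:

```shell
# Linux-only sketch: /dev/full rejects every write with ENOSPC, which makes
# the flush-at-exit failure reproducible without filling a real filesystem.

# print() "succeeds" because the data only reached the stdio buffer; the
# failing flush happens later, at exit, where it has historically gone
# unreported.
status=0
perl -e 'print "data\n" or die "print: $!"' > /dev/full 2>/dev/null || status=$?
echo "unchecked script exit status: $status"

# Checking the close catches the failed flush and turns it into nonzero exit.
if ! perl -e 'print "data\n";
              END { close(STDOUT) or die "close STDOUT: $!" }' \
     > /dev/full 2>/dev/null; then
    echo "close(STDOUT) caught the failed flush"
fi
```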

@p5pRT


commented Nov 18, 2008

From ben@morrow.me.uk

Quoth Mordechai.T.Abzug@​nasa.gov (Morty Abzug)​:

On Mon, Nov 17, 2008 at 10​:38​:05AM -0800, Chip Salzenberg via RT wrote​:

As for restoring the partially read file back to its original name
on error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

In my use case, I was looking to use Perl to in-place edit system
configs. The processing is minimal. I would much rather that failure
automatically revert the config rather than leave a corrupt config
(neither the old nor the new) in place.

  perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'

And what about the case where one has an error when using -i without a
backup, i.e. perl -pi -e $something $file? Perl knows that something
has gone horribly wrong, but it doesn't have a name to give a second
file. In that case, I'd say that the correct behavior is to revert
the original file name to the original contents.

But that's not easy, since the original file has been unlinked and Unix
doesn't have a 'flink' syscall. The contents would have to be copied,
which could in turn incur errors (disk full, for instance), which would
then have to have their own handling... Just use a backup if you need to
recover from errors.

Ben
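Ben's idiom can be exercised without a full disk by making the edit code itself die partway through. Note that it keys off $?, so it only helps when the failure actually yields a nonzero exit, which is exactly the guarantee this ticket asks of the -i loop. The file name and the simulated error below are invented:

```shell
# Sketch of the END-block recovery idiom: the edit dies midway, and the END
# block moves the untouched backup back under the original name.
cd "$(mktemp -d)"
printf 'alpha\nbeta\n' > conf
perl -pi~ -e 's/^/edited /;
              die "simulated write error" if $. == 2;
              END { $? and rename "$ARGV~", $ARGV }' conf 2>/dev/null || true
cat conf    # the original contents, restored from conf~
```

Inside an END block, $? holds the status the program is about to exit with, so the rename fires only on failure; on success the backup is left in place as usual.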

@p5pRT


commented Nov 18, 2008

From tchrist@perl.com

<<<====================================================>>>

¶ SUMMARY​: Yes, Chip, it's a bug, one I've kvetched
  about for many a year (but see §x below).
  Yet patching's complex, as you'll read anon.

¶ METAPHOR​: There is nothing new under the sun. [Eccl 1​:9]

¶ IN LATIN​: Sicut erat in principio, et nunc, et semper.

<<<====================================================>>>

On Mon, 17 Nov 2008 10​:38​:07 PST, Chip Salzenberg wrote​:

A first step in handling write errors would be dying if either
print or close returns false. This would be a significant
change from the old ways, but I think it's OK ...

Indeed; see §xv below.

data loss is one of the primary evils, after all.

Agreed​: vehemently, vociferously, and vituperatively.

See §xiii below.

As for restoring the partially read file back to its original
name on error​: I actually think that's a bad idea. There may
be a LOT of processing involved, and partial output is
definitely better than no output in some circumstances.

Finally, adding death to the behavior of existing code that has
its own read/print loop is far more questionable, and I'd
rather not do it. In other words, if you write this you should
get death on disk full​:

perl -pi~ 's/foo/bar/' *.c

But if you write this you should _not_ get automatic death​:

#!/usr/bin/perl
$^I = '~';
while (<>) { ....; print }

Comments?

How nice of you to ask! I'm only too glad to provide a few
brief ones--which can be found following my signature. :-)

Chip, this bug is one I've long lamented, and the problem is more
general than -i alone, but 'twas there I first felt its bite.

When a filehandle is *implicitly* closed (or even flushed) by
Perl, such as when there is a reopen on a handle that's already
open, when an autovivved *GLOB{IO} object gets DESTROYed (and,
I hope, a local *GLOB), or when the program itself exits, there
is usually no notification of this.

This makes for incorrect programs--and silently lost data!!

If the programmer cannot check, then Perl *must*; and doesn't.

If you look at pp_fork(), you'll see that it does, voidally,

  PERL_FLUSHALL_FOR_CHILD

I presume you recall why we must do that, and may even
recall how Sun's fflush(NULL) would accidentally also
fpurge() input streams, not just fflush() output streams.

The fflush NULL case is here in Perl_PerlIO_flush​:

  /*
  * Is it good API design to do flush-all on NULL, a
  * potentially errorneous input? Maybe some magical value
  * (PerlIO* PERLIO_FLUSH_ALL = (PerlIO*)-1;)? Yes, stdio does
  * similar things on fflush(NULL), but should we be bound by
  * their design decisions? --jhi
  */
  PerlIO **table = &PL_perlio;
  int code = 0;
  while ((f = *table)) {
      int i;
      table = (PerlIO **) (f++);
      for (i = 1; i < PERLIO_TABLE_SIZE; i++) {
          if (*f && PerlIO_flush(f) != 0)
              code = -1;
          f++;
      }
  }
  return code;

But again, people are voiding (erroneously disregarding) it.
Similarly, if you look at pp_flock(), whose issues, Chip, I
believe you will recall, again you see disreputable voiding​:

  if (fp) {
      (void)PerlIO_flush(fp);
      value = (I32)(PerlLIO_flock(PerlIO_fileno(fp), argtype) >= 0);
  }

Meanwhile in Perl_do_openn(), called by pp_open(), you find
many calls that eventually chase down to PerlIO__close, and
these in turn to Perl_PerlIO_flush with a specific handle.

That case is this​:

  const PerlIO_funcs *tab = PerlIOBase(f)->tab;

  if (tab && tab->Flush)
      return (*tab->Flush) (aTHX_ f);

I get lost in the longjump() in S_my_exit_jump, but you can
go to perl_destruct() and find

  /* Need to flush since END blocks can produce output */
  my_fflush_all();

which leads you to the following evil code​:

  if (open_max > 0) {
      long i;
      for (i = 0; i < open_max; i++)
          if (STDIO_STREAM_ARRAY[i]._file >= 0 &&
              STDIO_STREAM_ARRAY[i]._file < open_max &&
              STDIO_STREAM_ARRAY[i]._flag)
              PerlIO_flush(&STDIO_STREAM_ARRAY[i]);
      return 0;
  }

Notice again: we're neglecting to report a flush failure.

If reading through the code isn't enough (to make your head spin),
I also have direct empirical evidence, well tested, that at program
exit time the program exits silently, reporting no error, if the
STDOUT flush fails.

I complain about this every 5 or 10 years. The last time was only
this past July. Here is part of it​:

http​://www.gossamer-threads.com/lists/perl/porters/230671?search_string=even%20my%20cat%20is%20smarter%20than;#230671

in which I prove that even my cat is smarter than perl. :-)

Pardon me, while I repeat myself​:

» Which brings me to my final complaint of the day. That code is
» wrong because Perl itself is wrong. No really, it is.

» Watch​:

» % df -h .
» Filesystem Size Used Avail Capacity Mounted on
» /dev/wd0a 124M 117M -5.0K 100% /
»
» % ls -l it*
» -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it
»
» % perl -i.orig -pe 'print "this is more stuff\n"' it
»
» % echo $?
» 0
»
» % ls -l it*
» 0 -rw-r--r-- 1 tchrist wheel 0 Jul 29 15​:05 it
» 0 -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it.orig

» To this day, Perl's implicit closing of files doesn't warn you
» of errors, let alone exit nonzero. This makes it do wrong
» thing and not even tell you it did them wrong. This is a
» *true* problem, because checking for the success of print() is
» neither necessary nor sufficient to detect the success of
» print(). Yes, you read that correctly. It's because of
» buffering, plus the persistence of the err flag on the file
» structure.

» I've never convinced anybody this is important. Since *every*
» program should do this for correctness, it has to be in the run-
» time system to avoid it ever being forgotten. Sure, there's
» stuff like this you can do​:

» END { close(STDOUT) || die "can't close stdout​: $!" }

See §vi below.

» But that sort of thing should happen on all implicitly closed
» things. And it really must. Even IO​::Handle​::close doesn't
» bother. Perl knows what handles it's flushing closing during
» global destruction, at least if you don't just blindly
» fflush(0).

» Watch again, starting from before the bogus, ill-reported
» rename attempt​:

» % df -h .
» Filesystem Size Used Avail Capacity Mounted on
» /dev/wd0a 124M 117M -5.0K 100% /
»
» % ls -l it
» -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it
»
» % perl -e 'print "stuff\n"' >> it; echo $?
» 0
»
» % perl -e 'open(my $fh, ">>it") || die "open $!"; print $fh "stuff\n"; print STDOUT "ok now\n"'
» ok now
»
» % echo $?
» 0
»
» % ls -l it
» -rw-r--r-- 1 tchrist wheel 0 Jul 29 14​:06 it

» This is all incorrect behavior on Perl's part. Even cat knows better!

» % echo foo | cat >> it; echo $?
» cat​: stdout​: No space left on device
» 1

» +-------------------------------------------------------+
» | Notice how even my cat is smarter than your perl!? :( |
» +-------------------------------------------------------+

» What's that about, eh?

» But I've been saying this for years, just like everything else.
» Makes no difference. Depressing, eh?

» BTW, this proves my point about checking for print's status
» being a waste of time, neither necessary nor sufficient to
» catch failed prints!

» % perl -e 'open(my $fh, ">>it") || die "open $!"; print $fh "stuff\n" or die "print $!"; print STDOUT "ok now\n"'; echo exit status was $?
» ok now
» exit status was 0

» And you can't get the danged thing to detect its folly​:

» % perl -WE 'open(my $fh, ">>it") || die "open $!"; say $fh "stuff" or die "
» print $!"; say "ok now"' ; echo exit status was $?
» ok now
» exit status was 0

» I firmly believe that this needs to be a part of *EVERY* Perl
» program, which consequently means it should *not* be there, but
» in the core itself​:

» END { close(STDOUT) || die "can't close stdout​: $!" }

» And I believe this *much* more than some here believe <ARGV>
» needs to implicitly map {"< $_\0"} @​ARGV (since that breaks
» existing working code, whereas mine fixes existing broken
» code). Furthermore, I also believe all *IMPLICIT* closes
» that fail need to generate a mandatory io warning, but that
» of STDOUT should be a fatal. I've believed all this for a
» long time; said it often enough. Anything else is just plain
» wrong behavior. I'm not quite sure whether ARGVOUT failure
» should be a warning or a fatal, but it should be *something*
» suitably noisy.

And then in a grandchild followup, where I wrote​:

»> I am glad you mentioned this, because I also think it's a
»> pretty egregious bug, but I was rather imagining you'd come
»> out to defend it, point out that it has always been that way
»> and why change it now, and anyway is clearly mentioned as a
»> note to an addendum to a particular perlfaq answer. I am glad
»> we agree on this issue at least :-(.

[...]

»> Yup, and also for STDERR (if closing STDERR fails you can
»> attempt to print a warning message to it, which might or might
»> not get through, but you can at least exit with nonzero
»> status).

» Possibly. Probably. Maybe.

»>> Furthermore, I also believe all *IMPLICIT* closes that fail
»>> need to generate a mandatory io warning,

»> This also makes perfect sense.

» Easy for you to say. Others have said otherwise in the past,
» and I've not prevailed. I don't know if those people are
» still alive anymore though, for I don't recall who they were
» nor the thrust of their arguments against such sanity. I
» don't recall ever having been convinced of whatever
» "reasoning" they used, but absence of evidence is not to be
» confused with evidence of absence, so perhaps I was and that
» memory is now hidden from me.

»> Does anyone really object to this?

» They clearly did, or it would have been done. But maybe this
» is one of those rare periods in history where we can fix it
» when the PC police aren't breathing down our throats. One
» shall see.

»> In principle it is a backwards incompatible change (code that
»> previously chugged along without visible problems will now
»> start producing warnings or exiting with nonzero status) but I
»> can't see the point of taking compatibility to such extremes.

» A bug is a bug, and this is. Always has been.

Rafaël then kindly spoke up, saying​:

»>> To this day, Perl's implicit closing of files doesn't warn
»>> you of errors, let alone exit nonzero. This makes it do wrong
»>> thing and not even tell you it did them wrong. This is a
»>> *true* problem, because checking for the success of print()
»>> is neither necessary nor sufficient to detect the success of
»>> print(). Yes, you read that correctly. It's because of
»>> buffering, plus the persistence of the err flag on the file
»>> structure.

»>> I've never convinced anybody this is important.

»> It absolutely is. I had no idea, and as far as I'm
»> concerned it's broken obviously enough that it needs
»> no supporting argument.

» I concur. A patch would help, but I know that PerlIO is not
» the simplest thing to patch, and the problem is probably a
» bit complex now.

Indeed; see §xvi below.

» A proper bug report would help, too, because currently this one
» is deep buried in another thread.

I was glad to see that I'd finally convinced SOMEONE this to be
a genuine problem. And I *think* I even made such a bug report,
but don't recall for certain.

But I know I didn't submit a patch; alas for §ix below!

I did, however, look into the problem, as you see from the C code
cited above, where I found Rafaël to be quite on the mark when he
said that "PerlIO is not the simplest thing to patch." :-(

And as I believe I have shown, it's not just one place that
needs fixing, but many. And they may not require the same
treatment​: see §v below.

I believe that all voided flushes are wrong.

I also believe that the code in Perl_do_openn() that reads

  if (result == EOF && fd > PL_maxsysfd) {
      /* Why is this not Perl_warn*() call ? */
      PerlIO_printf(Perl_error_log,
                    "Warning: unable to close filehandle %s properly.\n",
                    GvENAME(gv));
  }

is a Microsoftening of the true etiology behind the underlying
error, which we shall not now ever come to know--and must.

Where now the errno and strerror thereof? "Can't close file"
bereft of the cause is *excruciatingly* frustrating to those
it's perpetrated upon. It is EVIL.

My five rules for "proper" (=good) error messages are​:

  $0​: They name the program generating the message.
  $1​: They name the function that failed.
  $2​: They name the arguments to said function.
  $3​: They name the resulting errno, that is, the
  standard system error message, if appropriate.
  $4​: They go out STDERR, unbuffered, which is fileno 2.

It's hard to call one of those more important than another, but
insofar as I can discern, the "Warning" above (which doesn't seem
to be a proper warning, per the comment), fails at $3, occluding
the true problem from the user in a rather Redmundial lapse;
see §ii below.

--tom

--
§ i Help save the world!

  --Larry Wall in README

§ ii You tell it that it's indicative by appending $!. That's
  why we made $! such a short variable name, after all. :-)

  --Larry Wall in <199709081801.LAA20629@​wall.org>

§ iii And you can still put in all that cruft if you want to.
  You can even force yourself to have to do it. But to me,
  it feels a bit like slavery, so I'm still looking for a
  land flowing with milk and honey, even if there are a few
  giants in it.

  --Larry Wall

§ iv Be consistent.

  --Larry Wall in the perl man page

§ v The problem with being consistent is that there are lots of ways
  to be consistent, and they're all inconsistent with each other.

  --Larry Wall in-ID <20050516005559.GC26184@​wall.org>,
  to the perl6-language mailing-list

§ vi The computer should be doing the hard work.
  That's what it's paid to do, after all.

  --Larry Wall in <199709012312.QAA08121@​wall.org>

§ vii I think I'll side with the pissheads on this one.

  --Larry Wall

§ viii It seems like a sane thing to me,
  but that's a rather low standard.
  --Larry Wall in <20050419150023.GA19507@​wall.org>

§ ix This has been planned for some time. I guess we'll just have
  to find someone with an exceptionally round tuit.

  --Larry Wall in <199709302338.QAA17037@​wall.org>

§ x Er, Tom, I hate to be the one to point this out, but your fix
  list is starting to resemble a feature list. You must be
  human or something.

  --Larry Wall in <199801081824.KAA29602@​wall.org>

§ xi Down that path lies madness. On the other hand, the road to
  hell is paved with melting snowballs.
  --Larry Wall in <1992Jul2.222039.26476@​netlabs.com>

§ xii But maybe we could try to set some slushiness milestones on
  the road to hell freezing over...

  --Larry Wall in <20050314165932.GA12577@​wall.org>

§ xiii I have no opinion on its suitability for any particular task.
  I'm just the language designer--my job is to shoot you in the
  foot and make you think you did it to yourself. :-)

  --Larry Wall in <20050309170804.GA22973@​wall.org>

§ xiv I recommend not remaking my mistakes.
  Please make different mistakes. :-)

  --Larry Wall in <20040317192052.GA10645@​wall.org>

§ xv I suppose one could claim that an undocumented feature
  has no semantics. :-(

  --Larry Wall in <199710290036.QAA01818@​wall.org>

§ xvi It's not designed to make people happy who want to confuse
  those issues. We have macros for that.

  --Larry Wall in <20050419155213.GC19507@​wall.org>

§ xvii If you write something wrong enough, I'll be glad to make up
  a new witticism just for you.

  --Larry Wall in <199702221943.LAA20388@​wall.org>

§xviii ...sometimes collections of stupid utterances can be
  rather clever. If my writings are ever published
  posthumously, they should probably be called "A
  Collection of Stupid Utterances", or some such... :-)

  --Larry Wall in <20050303163144.GA5235@​wall.org>

§ xix : If I don't document something, it's usually either for
  : a good reason, or a bad reason. In this case it's a
  : good reason. :-)
  : --Larry Wall in <1992Jan17.005405.16806@​netlabs.com>

  Yeah, I keep saying that.

  The trouble with being quoted a lot is that it makes
  other people think you're quoting yourself when in fact
  you're merely repeating yourself.

  --Larry Wall

§ xx : 1. What is the possibility of this being added
  : in the future?

  In the near future, the probability is close to zero.
  In the distant future, I'll be dead, and posterity can
  do whatever they like... :-)

  --Larry Wall

§ xxi Well, you know, Hubbard had a bunch of people sworn
  to commit suicide when he died. So of course he never
  officially died...

  --Larry Wall in <199804141540.IAA05247@​wall.org>

@p5pRT


commented Nov 27, 2008

From @chipdude

Aside​: the referenced message includes a romaniacally-numbered list of Larry
quotations. I've often joked about all Perl programmers owning a copy of The
Collected Sayings of Chairman Larry, but Tom's preferred metaphors run more
to the canon than the commie.

On Tue, Nov 18, 2008 at 04​:23​:41PM -0700, Tom Christiansen wrote​:

On Mon, 17 Nov 2008 10​:38​:07 PST, Chip Salzenberg wrote​:

A first step in handling write errors would be dying if either
print or close returns false. This would be a significant
change from the old ways, but I think it's OK ...
Indeed
data loss is one of the primary evils, after all.
Agreed​: vehemently, vociferously, and vituperatively.

Yay. Action item #1​: die on output failure in a -i loop.

You go on at length (usefully - I have much context to swap back in) about
Perl's cavalier disregard for reporting flush failures in many
circumstances. I'll take that exposition as a guide, first, to help complete
action item #1. Afterwards, though, I'll be interested in revisiting the
broader question of reporting all such failures correctly. Of course I will
run afoul of §xxii, but c'est la vie.

§ xxii There's often more than one correct thing.
  There's often more than one right thing.
  There's often more than one obvious thing.

--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From @chipdude

On Tue, Nov 18, 2008 at 01​:11​:27PM +0000, Ben Morrow wrote​:

Quoth Mordechai.T.Abzug@​nasa.gov (Morty Abzug)​:

On Mon, Nov 17, 2008 at 10​:38​:05AM -0800, Chip Salzenberg via RT wrote​:

As for restoring the partially read file back to its original name
on error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

In my use case, I was looking to use Perl to in-place edit system
configs. The processing is minimal. I would much rather that failure
automatically revert the config rather than leave a corrupt config
(neither the old nor the new) in place.

perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'

That's a clever idiom that will work in many cases. Kinda ugly, but oh
well. I wonder if we already ensure that $ARGV is undef after the entire
while (<>) {} loop is complete. If not, we should. Otherwise a die or
nonzero exit or setting of $? in another END block could trigger a spurious
rename. If we do appropriately unset $ARGV, then​:

  perl -pi~ -e'... END { $? and $ARGV and rename "$ARGV~", $ARGV }'

I wonder whether we could make this a standard but optional feature of
inplace editing. How about​:

  ${^INPLACE_SAFE}++; # nop in old Perls, forces rename on error in new Perls
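Spelled out, the revert-on-error behaviour that ${^INPLACE_SAFE} would enable amounts to the following. This is a minimal sketch in Python rather than Perl internals; the function name and suffix are illustrative only:

```python
import os

def edit_inplace_with_backup(path, transform, suffix="~"):
    # Mimic `perl -pi~`: rename the original aside, rewrite under the
    # original name, and -- the proposed extra step -- restore the
    # backup if anything goes wrong mid-edit.
    backup = path + suffix
    os.replace(path, backup)            # the original survives as the backup
    try:
        with open(backup) as src, open(path, "w") as dst:
            for line in src:
                dst.write(transform(line))
    except BaseException:
        os.replace(backup, path)        # revert to the pristine original
        raise
    return backup
```

On failure the reader is left with exactly the pre-edit file, which is the property Morty asked for.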

And what about the case where one has an error when using -i without a
backup [...]

But that's not easy, since the original file has been unlinked and Unix
doesn't have a 'flink' syscall. [...] Just use a backup if you need to
recover from errors.

Indeed.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From ben@morrow.me.uk

Quoth chip@​pobox.com (Chip Salzenberg)​:

On Tue, Nov 18, 2008 at 01​:11​:27PM +0000, Ben Morrow wrote​:

Quoth Mordechai.T.Abzug@​nasa.gov (Morty Abzug)​:

On Mon, Nov 17, 2008 at 10​:38​:05AM -0800, Chip Salzenberg via RT wrote​:

As for restoring the partially read file back to its original name
on error​: I actually think that's a bad idea. There may be a LOT of
processing involved, and partial output is definitely better than no
output in some circumstances.

In my use case, I was looking to use Perl to in-place edit system
configs. The processing is minimal. I would much rather that failure
automatically revert the config rather than leave a corrupt config
(neither the old nor the new) in place.

perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'

That's a clever idiom that will work in many cases. Kinda ugly, but oh
well. I wonder if we already ensure that $ARGV is undef after the entire
while (<>) {} loop is complete. If not, we should. Otherwise a die or
nonzero exit or setting of $? in another END block could trigger a spurious
rename. If we do appropriately unset $ARGV, then​:

  perl -pi~ -e'... END { $? and $ARGV and rename "$ARGV~", $ARGV }'

I wonder whether we could make this a standard but optional feature of
inplace editing. How about​:

${^INPLACE_SAFE}++;   # nop in old Perls, forces rename on error in new Perls

Better of course would be to process from $ARGV to "$ARGV$^I", and then
rename atomically if there's no error. How about a -j switch with that
effect? It ought to be a pretty trivial patch.

Ben


commented Nov 27, 2008

From @chipdude

On Thu, Nov 27, 2008 at 02​:29​:56AM +0000, Ben Morrow wrote​:

Quoth chip@​pobox.com (Chip Salzenberg)​:

On Tue, Nov 18, 2008 at 01​:11​:27PM +0000, Ben Morrow wrote​:

perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'

I wonder whether we could make this a standard but optional feature of
inplace editing. How about​:
${^INPLACE_SAFE}++; # nop in old Perls, forces rename on error in new Perls

Better of course would be to process from $ARGV to "$ARGV$^I", and then
rename atomically if there's no error. How about a -j switch with that
effect? It ought to be a pretty trivial patch.

I like, as long as the ${^INPLACE_SAFE} variable is still there for scripts that want
to be safe where possible and still run elsewhere. (Running 'perl5.8 -j~' will fail.)

Summarizing the plan​:

  * -ix means to set $^I="x"
  * -jx means to set $^I="x" and ${^INPLACE_SAFE}=1
  * if ${^INPLACE_SAFE} is true and $^I is nonempty, then <> does this​:
  > open ARGV, '<', $ARGV
  > open ARGVOUT, '>', "$ARGV$^I"
  > rename "$ARGV$^I", $ARGV # on successful completion

Comments?
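As a concrete illustration of that plan's data flow (not the eventual C implementation), here is the same write-then-rename shape in Python; `os.replace` plays the part of the final atomic rename, and the suffix is a stand-in for $^I:

```python
import os

def safe_inplace(path, transform, suffix=".new"):
    # Proposed "safe" mode: write the edited copy under a temporary
    # name ("$ARGV$^I" in the plan) and only rename it over the
    # original once everything has been written and flushed.
    tmp = path + suffix
    with open(path) as src, open(tmp, "w") as dst:
        for line in src:
            dst.write(transform(line))
        dst.flush()
        os.fsync(dst.fileno())          # surface deferred write errors now
    os.replace(tmp, path)               # atomic on POSIX; original intact until here
```

A failure at any point before the rename leaves the original file byte-for-byte untouched.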
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From @chipdude

On Wed, Nov 26, 2008 at 05​:09​:22PM -0800, Chip Salzenberg wrote​:

Yay. Action item #1​: die on output failure in a -i loop.

And here's the patch for this part of the larger issue of not ignoring flush
and close errors. It makes perl -i do the C equivalent of:

  close ARGVOUT or die "-p destination​: $!";

Testing was fun. It seems that Linux's ext2 filesystem is clever about
saving tiny files in inodes. To make a file use a whole block it has to be
somewhat larger than "hello". Or so it seemed to me in casual testing. In
any case, I did in fact provoke the Perl_die() in the patch; I verified it
with a gdb breakpoint. So, yay!

This patch builds on the previously posted patch for unified saving and
restoring of errno, so be sure to apply that one too.

Share & Enjoy!

Inline Patch
diff --git a/doio.c b/doio.c
index c945216..78c6e49 100644
--- a/doio.c
+++ b/doio.c
@@ -904,7 +904,14 @@ Perl_nextargv(pTHX_ register GV *gv)
     if (io && (IoFLAGS(io) & IOf_ARGV))
 	IoFLAGS(io) |= IOf_START;
     if (PL_inplace) {
-	(void)do_close(PL_argvoutgv,FALSE);
+	IO *outio;
+	if (PL_argvoutgv && isGV_with_GP(PL_argvoutgv) && (outio = GvIO(PL_argvoutgv))) {
+	    dSAVE_ERRNO;
+	    SETERRNO(0, 0);
+	    if (!io_close(outio, FALSE))
+		Perl_die(aTHX_ "-p destination: %s", errno ? Strerror(errno) : "previous error");
+	    RESTORE_ERRNO;
+	}
 	if (io && (IoFLAGS(io) & IOf_ARGV)
 	    && PL_argvout_stack && AvFILLp(PL_argvout_stack) >= 0)
 	{


-- 

Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From @hvds

Chip Salzenberg <chip@​pobox.com> wrote​:
:On Thu, Nov 27, 2008 at 02​:29​:56AM +0000, Ben Morrow wrote​:
:> Quoth chip@​pobox.com (Chip Salzenberg)​:
:> > On Tue, Nov 18, 2008 at 01​:11​:27PM +0000, Ben Morrow wrote​:
:> > > perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'
:> >
:> > I wonder whether we could make this a standard but optional feature of
:> > inplace editing. How about​:
:> > ${^INPLACE_SAFE}++; # nop in old Perls, forces rename on error in new Perls
:>
:> Better of course would be to process from $ARGV to "$ARGV$^I", and then
:> rename atomically if there's no error. How about a -j switch with that
:> effect? It ought to be a pretty trivial patch.
:
:I like, as long as the ${^INPLACE_SAFE} variable is still there for scripts that want
:to be safe where possible and still run elsewhere. (Running 'perl5.8 -j~' will fail.)
:
:Summarizing the plan​:
:
: * -ix means to set $^I="x"
: * -jx means to set $^I="x" and ${^INPLACE_SAFE}=1
: * if ${^INPLACE_SAFE} is true and $^I is nonempty, then <> does this​:
: > open ARGV, '<', $ARGV
: > open ARGVOUT, '>', "$ARGV$^I"
: > rename "$ARGV$^I", $ARGV # on successful completion
:
:Comments?

New commandline flags are rare and expensive, but I can just about see the
value here (even if the problem being solved barely registered as an itch
for me - I usually use the cavalier '-pi -wle').

As defined here, when x is empty, -j is the same as -i. Should it do
something more useful, like backup to a tempfile and delete on successful
completion? INPLACE_SAFE without droppings would suit my style better.

Hugo


commented Nov 27, 2008

From @chipdude

On Thu, Nov 27, 2008 at 08​:20​:02AM +0000, hv@​crypt.org wrote​:

Chip Salzenberg <chip@​pobox.com> wrote​:
:Summarizing the plan​:
:
: * -ix means to set $^I="x"
: * -jx means to set $^I="x" and ${^INPLACE_SAFE}=1
: * if ${^INPLACE_SAFE} is true and $^I is nonempty, then <> does this​:
: > open ARGV, '<', $ARGV
: > open ARGVOUT, '>', "$ARGV$^I"
: > rename "$ARGV$^I", $ARGV # on successful completion
:
:Comments?

New commandline flags are rare and expensive, but I can just about see the
value here (even if the problem being solved barely registered as an itch
for me - I usually use the cavalier '-pi -wle').

I hate to do it, but we do want safety to be convenient, do we not? We could be
somewhat hackier and take advantage of the fact that "*" is already special
in the -i string. Using a double star means twice as good a backup​:

  -i.bak - unsafe .bak
  -i'*.bak' - unsafe .bak
  -i'**.bak' - safe .bak

Simpler in a way, more complex in another way.
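For reference, the existing `*` handling that the double-star form would piggyback on (documented in perlrun) derives the backup name like this; a Python rendering, with the caveat that the proposed leading `**` marker would have to be peeled off before this expansion runs:

```python
def backup_name(filename, spec):
    # perl -i's backup-name rule: if the extension spec contains '*',
    # every '*' is replaced by the current filename; otherwise the
    # spec is simply appended as a suffix.
    if "*" in spec:
        return spec.replace("*", filename)
    return filename + spec
```

So `-i.bak` and `-i'*.bak'` both name the backup `file.bak`, while `-i'orig_*'` gives a prefixed `orig_file`.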

As defined here, when x is empty, -j is the same as -i. Should it do
something more useful, like backup to a tempfile and delete on successful
completion?

Yes, this is a Good Idea.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From @chipdude

On Wed, Nov 26, 2008 at 11​:32​:31PM -0800, Chip Salzenberg wrote​:

On Wed, Nov 26, 2008 at 05​:09​:22PM -0800, Chip Salzenberg wrote​:

Yay. Action item #1​: die on output failure in a -i loop.

And here's the patch for this part of the larger issue of not ignoring flush
and close errors.

... but it also fails lib/warnings.t. I think this is more patch bug than
bad test. More as it happens.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 27, 2008

From @Tux

On Thu, 27 Nov 2008 03​:28​:50 -0800, Chip Salzenberg <chip@​pobox.com>
wrote​:

On Thu, Nov 27, 2008 at 08​:20​:02AM +0000, hv@​crypt.org wrote​:

Chip Salzenberg <chip@​pobox.com> wrote​:
:Summarizing the plan​:
:
: * -ix means to set $^I="x"
: * -jx means to set $^I="x" and ${^INPLACE_SAFE}=1
: * if ${^INPLACE_SAFE} is true and $^I is nonempty, then <> does this​:
: > open ARGV, '<', $ARGV
: > open ARGVOUT, '>', "$ARGV$^I"
: > rename "$ARGV$^I", $ARGV # on successful completion
:
:Comments?

New commandline flags are rare and expensive, but I can just about see the
value here (even if the problem being solved barely registered as an itch
for me - I usually use the cavalier '-pi -wle').

I hate to do it, but we do want safety convenient, do we not? We could be
somewhat hackier and take advantage of the fact that "*" is already special
in the -i string. Using a double star means twice as good a backup​:

-i.bak - unsafe .bak
-i'*.bak' - unsafe .bak
-i'**.bak' - safe .bak
  -i'**' - safe (using tmpfile), remove tmp on success

Simpler in a way, more complex in another way.

As defined here, when x is empty, -j is the same as -i. Should it do
something more useful, like backup to a tempfile and delete on successful
completion?

Yes, this is a Good Idea.

--
H.Merijn Brand Amsterdam Perl Mongers http​://amsterdam.pm.org/
using & porting perl 5.6.2, 5.8.x, 5.10.x, 5.11.x on HP-UX 10.20, 11.00,
11.11, 11.23, and 11.31, SuSE 10.1, 10.2, and 10.3, AIX 5.2, and Cygwin.
http​://mirrors.develooper.com/hpux/ http​://www.test-smoke.org/
http​://qa.perl.org http​://www.goldmark.org/jeff/stupid-disclaimers/


commented Nov 28, 2008

From @chipdude

On Thu, Nov 27, 2008 at 03​:30​:07AM -0800, Chip Salzenberg wrote​:

On Wed, Nov 26, 2008 at 11​:32​:31PM -0800, Chip Salzenberg wrote​:

On Wed, Nov 26, 2008 at 05​:09​:22PM -0800, Chip Salzenberg wrote​:

Yay. Action item #1​: die on output failure in a -i loop.
And here's the patch for this part of the larger issue of not ignoring flush
and close errors.
... but it also fails lib/warnings.t.

But not any more! Here's a new cut. It only reports flush failures caused
by system call failure, not previously noted failures recorded as ferror(fp).

Inline Patch
diff --git a/doio.c b/doio.c
index c945216..ce57ae1 100644
--- a/doio.c
+++ b/doio.c
@@ -904,7 +904,14 @@ Perl_nextargv(pTHX_ register GV *gv)
     if (io && (IoFLAGS(io) & IOf_ARGV))
 	IoFLAGS(io) |= IOf_START;
     if (PL_inplace) {
-	(void)do_close(PL_argvoutgv,FALSE);
+	IO *outio;
+	if (PL_argvoutgv && isGV_with_GP(PL_argvoutgv) && (outio = GvIO(PL_argvoutgv))) {
+	    dSAVE_ERRNO;
+	    SETERRNO(0, 0);
+	    if (!io_close(outio, FALSE) && errno) /* only report _new_ failure to flush */
+		Perl_die(aTHX_ "-p destination: %s", Strerror(errno));
+	    RESTORE_ERRNO;
+	}
 	if (io && (IoFLAGS(io) & IOf_ARGV)
 	    && PL_argvout_stack && AvFILLp(PL_argvout_stack) >= 0)
 	{

-- 

Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 28, 2008

From @chipdude

On Thu, Nov 27, 2008 at 12​:52​:08PM +0100, H.Merijn Brand wrote​:

On Thu, 27 Nov 2008 03​:28​:50 -0800, Chip Salzenberg <chip@​pobox.com> wrote​:

-i.bak - unsafe .bak
-i'*.bak' - unsafe .bak
-i'**.bak' - safe .bak
-i'**' - safe (using tmpfile), remove tmp on success

Yup, it all fits.

Code that wants to use safe renaming when possible and work elsewhere will
have to check $], e.g.​:

  BEGIN { $^I = $] >= 5.11 ? '**.bak' : '*.bak' }

or

  #!/usr/bin/perl -pi~
  BEGIN { $^I = '**~' if $] >= 5.11 }

which doesn't quite match the simplicity of the alternatively proposed

  ${^INPLACE_SAFE}++

but I can certainly live with it.

Objections?
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Nov 29, 2008

From ben@morrow.me.uk

Quoth chip@​pobox.com (Chip Salzenberg)​:

On Thu, Nov 27, 2008 at 02​:29​:56AM +0000, Ben Morrow wrote​:

Quoth chip@​pobox.com (Chip Salzenberg)​:

On Tue, Nov 18, 2008 at 01​:11​:27PM +0000, Ben Morrow wrote​:

perl -pi~ -e'... END { $? and rename "$ARGV~", $ARGV }'

I wonder whether we could make this a standard but optional feature of
inplace editing. How about​:
${^INPLACE_SAFE}++; # nop in old Perls, forces rename on error
in new Perls

Better of course would be to process from $ARGV to "$ARGV$^I", and then
rename atomically if there's no error. How about a -j switch with that
effect? It ought to be a pretty trivial patch.

I like, as long as the ${^INPLACE_SAFE} variable is still there for
scripts that want
to be safe where possible and still run elsewhere. (Running 'perl5.8
-j~' will fail.)

Summarizing the plan​:

* -ix means to set $^I="x"
* -jx means to set $^I="x" and ${^INPLACE_SAFE}=1
* if ${^INPLACE_SAFE} is true and $^I is nonempty, then <> does this​:
> open ARGV, '<', $ARGV
> open ARGVOUT, '>', "$ARGV$^I"
> rename "$ARGV$^I", $ARGV # on successful completion

Comments?

I was thinking more along the lines of

  -ix $^I = "x", $^J = undef
  Renames $ARGV to $ARGV$^I, processes from $ARGV$^I to $ARGV.

  -i $^I = "", $^J = undef
  Unlinks $ARGV, processes from unlinked file to $ARGV.

  -jx $^I = "", $^J = "x"
  Processes from $ARGV to $ARGV$^J, renames $ARGV$^J -> $ARGV on
  success.

  -j $^I = "", $^J = ""
  Processes from $ARGV to tempfile, renames tempfile -> $ARGV on
  success.

  -ix -jy $^I = "x", $^J = "y"
  Processes from $ARGV to $ARGV$^J; on success hardlink $ARGV as
  $ARGV$^I then rename $ARGV$^J -> $ARGV. On platforms without
  hardlinks rename instead of hardlinking, and warn (under -w)
  that the replacement is not atomic.

  -ix -j $^I = "x", $^J = ""
  Processes from $ARGV to tempfile; on success hardlink and rename
  as above.

Setting $^J from the program should work as expected. Any -j option
implies -i, so to test for this functionality

  $^J = "#";
  if (not defined $^I) {
  # we don't have atomic in-place; set $^I instead
  }

I think using $^J rather than something like ${^INPLACE_SAFE} is much
less confusing. If we're going to give a command-line option to this
then a single-letter $^X is not a big deal; and there is already an
association between the -x option and the $^X variable for several x.
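The most involved row of that table, -ix -jy, can be sketched as follows in Python (hardlink for the backup, then the atomic rename; all names and suffixes are stand-ins for the proposal's, not real perl behaviour):

```python
import os

def inplace_backup_and_swap(path, transform, bak=".bak", new=".new"):
    # Sketch of the proposed -ix -jy combination: edit into
    # "$ARGV$^J", then on success hardlink the untouched original as
    # the backup and atomically rename the new file over the original.
    tmp = path + new
    with open(path) as src, open(tmp, "w") as dst:
        for line in src:
            dst.write(transform(line))
    os.link(path, path + bak)           # backup shares the old file's inode
    os.replace(tmp, path)               # swap the edited copy into place
```

Because the hardlink and rename both happen only after a successful edit, an error at any earlier point leaves the original name pointing at the original data.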

Ben


commented Nov 29, 2008

From @sciurius

Chip Salzenberg <chip@​pobox.com> writes​:

Objections?

Not really, but I'd suggest a different approach.

Some time ago I tried to duplicate the <> functionality to make it
possible to handle the 'magic open' complications. The idea was to
have a module that, when use-d, would take over all <> processing.
This turned out to be not completely possible since there are some
aspects of <> that could not be implemented in plain Perl. See below.

So my alternative approach would be to first sort this out, making it
possible for a user module to completely take over <> functionality.
Issues like error handling and safe inplace editing can then easily be
handled with appropriate modules.

= What didn't work? IIRC there were issues with not being able to
  override eof (eof, eof() and eof(FILE)). Also, overloading <>
  only does scalar IO, since it ignores context.

-- Johan


commented Dec 1, 2008

From @chipdude

On Sat, Nov 29, 2008 at 11​:14​:26PM +0100, Johan Vromans wrote​:

Chip Salzenberg <chip@​pobox.com> writes​:

Objections?
[...]
So my alternative approach would be to first sort this out, making it
possible for a user module to completely take over <> functionality.
Issues like error handling and safe inplace editing can then easily be
handled with appropriate modules.

That's a clever thought. Adding -Meditsafe (just to pick a nice pragma) to
a command line isn't as catchy as -j~ or -i'**~' but it does offer an easy
upgrade path for backward compatibility​:

  eval "use editsafe"; # error OK

And you just know that Ingy will find a clever use for overloaded <>.

= What didn't work? IIRC there were issues with not being able to
override eof (eof, eof() and eof(FILE)). Also, overloading <>
only does scalar IO, since it ignores context.

So if we had tied eof() processing and READLINE was responsive to context,
it could work?
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Dec 1, 2008

From @sciurius

Chip Salzenberg <chip@​pobox.com> writes​:

Adding -Meditsafe (just to pick a nice pragma) to
a command line isn't as catchy as -j~ or -i'**~' but it does offer an easy
upgrade path for backward compatibility​:

That's a small price to pay for gaining an enormous amount of
flexibility.

So if we had tied eof() processing and READLINE was responsive to
context, it could work?

Maybe a pseudo-core entry​: CORE​::GLOBAL​::eof_argv ?

As for the READLINE and <>​: I'm not sure if it is a READLINE problem.
The <> problem is described in overload.pm.

-- Johan


commented Dec 1, 2008

From @chipdude

On Mon, Dec 01, 2008 at 11​:37​:49PM +0100, Johan Vromans wrote​:

Chip Salzenberg <chip@​pobox.com> writes​:

So if we had tied eof() processing and READLINE was responsive to
context, it could work?

Maybe a pseudo-core entry​: CORE​::GLOBAL​::eof_argv ?

I perceive there are two basic approaches that can be taken.

One of them provides hooks in and/or out to user-level code to handle the
various events surrounding standard <> processing, but the readline and
print operations would remain unchanged (presumably for speed purposes).
This seems to be what you're talking about.

The other of them would be less hacky but perhaps not efficient enough for
some purposes. Perl code can open ARGV and ARGVOUT and/or assign tied
filehandles to them. Then all the <> processing, period, happens via
"normal" (ahem) tied filehandle magic. This is something that could be done
today, modulo the '<> in list context' problem mentioned in overload.pm.

Is this a good summary?

Hard to know which of them is worth doing.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Dec 1, 2008

From @sciurius

[Quoting Chip Salzenberg, on December 1 2008, 15​:16, in "Re​: [perl #1154] -pi"]

The other of them would be less hacky but perhaps not efficient enough for
some purposes. Perl code can open ARGV and ARGVOUT and/or assign tied
filehandles to them. Then all the <> processing, period, happens via
"normal" (ahem) tied filehandle magic. This is something that could be done
today, modulo the '<> in list context' problem mentioned in overload.pm.

And modulo the eof() facility.

This is the approach I tried in Iterator​::Diamond.

-- Johan


commented Dec 1, 2008

From @sciurius

[Quoting Chip Salzenberg, on December 1 2008, 15​:16, in "Re​: [perl #1154] -pi"]

Hard to know which of them is worth doing.

Personally I think the added flexibility and safety will outweigh the
small (?) loss in performance.

Sometimes, often, reliability is more important than speed.

-- Johan


commented Dec 1, 2008

From @chipdude

On Tue, Dec 02, 2008 at 12​:22​:38AM +0100, Johan Vromans wrote​:

[Quoting Chip Salzenberg, on December 1 2008, 15​:16, in "Re​: [perl #1154] -pi"]

The other of them would be less hacky but perhaps not efficient enough for
some purposes. Perl code can open ARGV and ARGVOUT and/or assign tied
filehandles to them. Then all the <> processing, period, happens via
"normal" (ahem) tied filehandle magic. This is something that could be done
today, modulo the '<> in list context' problem mentioned in overload.pm.

And modulo the eof() facility.

Couldn't the distinction between C<eof> on the one hand, and C<eof()> (which
is the same as C<eof(ARGV)>) on the other hand, be handled entirely in
pp_eof? The current tied EOF feature would then be adequate, no?
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou

@p5pRT

This comment has been minimized.

Copy link
Author

commented Dec 1, 2008

From @chipdude

On Mon, Dec 01, 2008 at 03​:33​:47PM -0800, Chip Salzenberg wrote​:

On Tue, Dec 02, 2008 at 12​:22​:38AM +0100, Johan Vromans wrote​:

[Quoting Chip Salzenberg, on December 1 2008, 15​:16, in "Re​: [perl #1154] -pi"]

The other of them would be less hacky but perhaps not efficient enough for
some purposes. Perl code can open ARGV and ARGVOUT and/or assign tied
filehandles to them. Then all the <> processing, period, happens via
"normal" (ahem) tied filehandle magic. This is something that could be done
today, modulo the '<> in list context' problem mentioned in overload.pm.

And modulo the eof() facility.

Couldn't the distinction between C<eof> on the one hand, and C<eof()> (which
is the same as C<eof(ARGV)>) on the other hand, be handled entirely in
pp_eof? The current tied EOF feature would then be adequate, no?

Hrm. I think I'm all wet here.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Dec 2, 2008

From tchrist@perl.com

On Thu, Nov 27, 2008 at 12​:52​:08PM +0100, H.Merijn Brand wrote​:

On Thu, 27 Nov 2008 03​:28​:50 -0800, Chip Salzenberg <chip@​pobox.com> wrote​:

-i.bak - unsafe .bak
-i'*.bak' - unsafe .bak
-i'**.bak' - safe .bak
-i'**' - safe (using tmpfile), remove tmp on success

Yup, it all fits.

Code that wants to use safe renaming when possible and work elsewhere will
have to check $], e.g.​:

BEGIN { $^I = $] >= 5.11 ? '**.bak' : '*.bak' }

or

#!/usr/bin/perl -pi~
BEGIN { $^I = '**~' if $] >= 5.11 }

which doesn't quite match the simplicity of the alternatively proposed

${^INPLACE_SAFE}++

but I can certainly live with it.

Objections?

Um, Chip, are people *really* claiming that fixing the -i output
flush bug is something that would be "backwards incompatible",
and thus require some notice that such a thing is desired?!?

If so, I'm boggling. If not, I need to read more.

--tom


commented Dec 2, 2008

From tchrist@perl.com

= What didn't work? IIRC there were issues with not being able to
override eof (eof, eof() and eof(FILE)). Also, overloading <>
only does scalar IO, since it ignores context.

Which is why I advise overriding ->READLINE() on a tied *HANDLE's
dark-object if one wants that sort of thing, not C<overload>ing the
circumfixial <> applied to an object proper​: for with ->READLINE(),
you get correct wantarray() info; while the other way, you don't.

--tom


commented Dec 2, 2008

From tchrist@perl.com

So if we had tied eof() processing and READLINE was
responsive to context, it could work?

I'm wondering about your mood there, Chip. Not your
fault, but rather English's, the language being a bit
defective on ample, separate, distinguishable forms for
some useful shadings of meaning. Two possibilities
arise for valid sequences of tenses, and I don't know
which to apply here so to suss out your intended meaning​:

1. If <past subj> then <conditional>.

2. If <past indic> then <past indic>.

If you meant an exhortation,
  (mode=[optative] subjunctive,
  "O! If only 'twere so!" or "God grant ..."),
with a consequent necessarily in the conditional mood​:

  eg1​: If only READLINE() were responsive to context,
  then lo! it would work fine.

then your exhortation has long been granted.

If, however, you meant a simple statement of fact
  (mode=indicative, tense=past, voice=active, completedness
  or finitude possibly perfect, but more probably imperfect),
which takes therefore a consequent likewise in the past
indicative, not in any subjunctive or conditional mode​:

  eg2​: If READLINE() was responsive to context in 1994,
  then I didn't know about it.

Then I'd have to check *when* it became true. But my
tests show it's fine now.

The eof problem may be different.

Allegedly, per perltie(1), the ->EOF() method can be overridden
on a tied handle. However, I guess you'd need to use the hasargs
part of your frametrace to tell eof from eof(). Maybe.

---tom


commented Dec 2, 2008

From tchrist@perl.com

On Tue, Dec 02, 2008 at 12​:22​:38AM +0100, Johan Vromans wrote​:

[Quoting Chip Salzenberg, on December 1 2008, 15​:16, in "Re​:
[perl #1154] -pi"]

The other of them would be less hacky but perhaps not
efficient enough for some purposes. Perl code can open ARGV
and ARGVOUT and/or assign tied filehandles to them. Then
all the <> processing, period, happens via "normal" (ahem)
tied filehandle magic. This is something that could be done
today, modulo the '<> in list context' problem mentioned in
overload.pm.

And modulo the eof() facility.

(Thank goodness modulo isn't context-sensitive! %=)

Couldn't the distinction between C<eof> on the one hand, and
C<eof()> (which is the same as C<eof(ARGV)>) on the other hand,
be handled entirely in pp_eof? The current tied EOF feature
would then be adequate, no?

Hm, now that's an idea.

Having it be the *lexer* who autoqualified stuff like
@​ARGV into @​main​::ARGV didn't seem too terrible, but
the side effect of having &ARGV also go to main was,
um, odd (or @​ENV etc).

However, I also remember how it seemed cleaner
to have the *parser* be he who knew to split

  use Foo;

to compile into

  BEGIN {
  require Foo;
  Foo​::->import();
  }

as opposed to

  use Foo ();

which compiles into

  BEGIN {
  require Foo;
  }

Although that's not truly how it works, it looks like that's
how it works. The difference is that

  use Foo;

returns the return value of Foo​::->import(), while use Foo ()
returns (). But only people doing funny things with eval
q{use Foo} vs eval q{use Foo ()} should ever notice any
difference.

--tom


commented Dec 2, 2008

From @chipdude

On Mon, Dec 01, 2008 at 10​:15​:16PM -0700, Tom Christiansen wrote​:

Allegedly, per perltie(1), the ->EOF() method can be overridden
on a tied handle. However, I guess you'd need to use the hasargs
part of your frametrace to tell eof from eof(). Maybe.

Hm ... hasargs. An intriguing suggestion. It is conceivable that, given
tied ARGV, C<eof()> could do the C equivalent of​:

  local @​_ = (tied *ARGV);
  &{ $_[0]->can('EOF') };

which would leave hasargs false, thus distinguishing itself from C<eof> and
C<eof(ARGV)>, while still providing a valid $_[0]. It would also, however,
interact poorly with simple AUTOLOADing.

Fortunately, a simpler alternative presents itself​: Why not just add an
additional "special" parameter to the existing EOF call? Given a tied ARGV,
then​:

  eof(ARGV) -> +(tied *ARGV)->EOF;
  eof() -> +(tied *ARGV)->EOF(1);

Presto magic(k)o.

Now a digression for the human-language fans in the audience​:

So if we had tied eof() processing and READLINE was
responsive to context, it could work?

I'm wondering about your mood there, Chip. Not your
fault, but rather English's [...]

Indeed. Having once learned functional Spanish, I find that English's lack
of a proper subjunctive chafes from time to time.

If you meant an exhortation,
(mode=[optative] subjunctive,
"O! If only 'twere so!" or "God grant ..."),
with a consequent necessarily in the conditional mood​:

  eg1: If only READLINE() were responsive to context,
       then lo! it would work fine.

then your exhortation has long been granted.

This is, in fact, what I meant WRT READLINE. Thank you for the entertaining
and erudite request for clarification.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Dec 2, 2008

From @chipdude

On Mon, Dec 01, 2008 at 09​:46​:32PM -0700, Tom Christiansen wrote​:

Um, Chip, are people *really* claiming that fixing the -i output
flush bug is something that would be "backwards incompatible",
and thus require some notice that such a thing is desired?!?

Heh, no. We're now discussing a further level of safety​: The renaming of
finished files to their target names *after* completed output, as opposed to
the status quo of renaming first and _then_ writing.
--
Chip Salzenberg twitter​:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou


commented Dec 2, 2008

From @chipdude

On Mon, Dec 01, 2008 at 10​:41​:35PM -0800, Chip Salzenberg wrote​:

Fortunately, a simpler alternative presents itself​: Why not just add an
additional "special" parameter to the existing EOF call? Given a tied ARGV,
then​:

eof(ARGV) -> +(tied *ARGV)->EOF;
eof() -> +(tied *ARGV)->EOF(1);

Presto magic(k)o.

And, here's a patch that works and solves the problem. The details turned
out a bit different from my first idea, but it seems like it came out OK.
As the pod says about the EOF method:

  Starting with Perl 5.12, an additional integer parameter will be passed. It
  will be zero if C<eof> is called without parameter; C<1> if C<eof> is given
  a filehandle as a parameter, e.g. C<eof(FH)>; and C<2> in the very special
  case that the tied filehandle is C<ARGV> and C<eof> is called with an empty
  parameter list, e.g. C<eof()>.

Frankly, my use of magic numbers is a little old-school, but what the heck...
by the time you're tying *ARGV, I think a little magic is to be expected.

Share & Enjoy!

Inline Patch
diff --git a/pod/perltie.pod b/pod/perltie.pod
index 162272b..9f26473 100644
--- a/pod/perltie.pod
+++ b/pod/perltie.pod
@@ -952,6 +952,19 @@ This method will be called when the C<getc> function is called.
 
     sub GETC { print "Don't GETC, Get Perl"; return "a"; }
 
+=item EOF this
+X<EOF>
+
+This method will be called when the C<eof> function is called.
+
+Starting with Perl 5.12, an additional integer parameter will be passed.  It
+will be zero if C<eof> is called without parameter; C<1> if C<eof> is given
+a filehandle as a parameter, e.g. C<eof(FH)>; and C<2> in the very special
+case that the tied filehandle is C<ARGV> and C<eof> is called with an empty
+parameter list, e.g. C<eof()>.
+
+    sub EOF { not length $stringbuf }
+
 =item CLOSE this
 X<CLOSE>
 
diff --git a/pp_sys.c b/pp_sys.c
index 211633b..b200eb1 100644
--- a/pp_sys.c
+++ b/pp_sys.c
@@ -2014,51 +2014,60 @@ PP(pp_eof)
 {
     dVAR; dSP;
     GV *gv;
+    IO *io;
+    MAGIC *mg;
 
-    if (MAXARG == 0) {
-	if (PL_op->op_flags & OPf_SPECIAL) {	/* eof() */
-	    IO *io;
-	    gv = PL_last_in_gv = GvEGV(PL_argvgv);
-	    io = GvIO(gv);
-	    if (io && !IoIFP(io)) {
-		if ((IoFLAGS(io) & IOf_START) && av_len(GvAVn(gv)) < 0) {
-		    IoLINES(io) = 0;
-		    IoFLAGS(io) &= ~IOf_START;
-		    do_open(gv, "-", 1, FALSE, O_RDONLY, 0, NULL);
-		    if ( GvSV(gv) ) {
-			sv_setpvs(GvSV(gv), "-");
-		    }
-		    else {
-			GvSV(gv) = newSVpvs("-");
-		    }
-		    SvSETMAGIC(GvSV(gv));
-		}
-		else if (!nextargv(gv))
-		    RETPUSHYES;
-	    }
-	}
+    if (MAXARG)
+	gv = PL_last_in_gv = MUTABLE_GV(POPs);	/* eof(FH) */
+    else if (PL_op->op_flags & OPf_SPECIAL)
+	gv = PL_last_in_gv = GvEGV(PL_argvgv);	/* eof() - ARGV magic */
+    else
+	gv = PL_last_in_gv;			/* eof */
+
+    if (!gv)
+	RETPUSHNO;
+
+    if ((io = GvIO(gv)) && (mg = SvTIED_mg((const SV *)io, PERL_MAGIC_tiedscalar))) {
+	PUSHMARK(SP);
+	XPUSHs(SvTIED_obj(MUTABLE_SV(io), mg));
+	/*
+	 * in Perl 5.12 and later, the additional parameter is a bitmask:
+	 * 0 = eof
+	 * 1 = eof(FH)
+	 * 2 = eof()  <- ARGV magic
+	 */
+	if (MAXARG)
+	    mPUSHi(1);		/* 1 = eof(FH) - simple, explicit FH */
+	else if (PL_op->op_flags & OPf_SPECIAL)
+	    mPUSHi(2);		/* 2 = eof()   - ARGV magic */
 	else
-	    gv = PL_last_in_gv;			/* eof */
+	    mPUSHi(0);		/* 0 = eof     - simple, implicit FH */
+	PUTBACK;
+	ENTER;
+	call_method("EOF", G_SCALAR);
+	LEAVE;
+	SPAGAIN;
+	RETURN;
     }
-    else
-	gv = PL_last_in_gv = MUTABLE_GV(POPs);	/* eof(FH) */
 
-    if (gv) {
-	IO * const io = GvIO(gv);
-	MAGIC * mg;
-	if (io && (mg = SvTIED_mg((const SV *)io, PERL_MAGIC_tiedscalar))) {
-	    PUSHMARK(SP);
-	    XPUSHs(SvTIED_obj(MUTABLE_SV(io), mg));
-	    PUTBACK;
-	    ENTER;
-	    call_method("EOF", G_SCALAR);
-	    LEAVE;
-	    SPAGAIN;
-	    RETURN;
+    if (!MAXARG && (PL_op->op_flags & OPf_SPECIAL)) {	/* eof() */
+	if (io && !IoIFP(io)) {
+	    if ((IoFLAGS(io) & IOf_START) && av_len(GvAVn(gv)) < 0) {
+		IoLINES(io) = 0;
+		IoFLAGS(io) &= ~IOf_START;
+		do_open(gv, "-", 1, FALSE, O_RDONLY, 0, NULL);
+		if (GvSV(gv))
+		    sv_setpvs(GvSV(gv), "-");
+		else
+		    GvSV(gv) = newSVpvs("-");
+		SvSETMAGIC(GvSV(gv));
+	    }
+	    else if (!nextargv(gv))
+		RETPUSHYES;
 	}
     }
 
-    PUSHs(boolSV(!gv || do_eof(gv)));
+    PUSHs(boolSV(do_eof(gv)));
     RETURN;
 }
 
diff --git a/t/op/tiehandle.t b/t/op/tiehandle.t
index 735a25c..dbd0846 100755
--- a/t/op/tiehandle.t
+++ b/t/op/tiehandle.t
@@ -10,7 +10,7 @@ my $data = "";
 my @data = ();
 
 require './test.pl';
-plan(tests => 50);
+plan(tests => 63);
 
 sub compare {
     local $Level = $Level + 1;
@@ -61,6 +61,11 @@ sub READ {
     3;
 }
 
+sub EOF {
+    ::compare(EOF => @_);
+    @data ? '' : 1;
+}
+
 sub WRITE {
     ::compare(WRITE => @_);
     $data = substr($_[1],$_[3] || 0, $_[2]);
@@ -69,7 +74,6 @@ sub WRITE {
 
 sub CLOSE {
     ::compare(CLOSE => @_);
-    
     5;
 }
 
@@ -92,11 +96,18 @@ is($r, 1);
 $r = printf $fh @expect[2,3];
 is($r, 2);
 
-$text = (@data = ("the line\n"))[0];
+@data = ("the line\n");
+@expect = (EOF => $ob, 1);
+is(eof($fh), '');
+
+$text = $data[0];
 @expect = (READLINE => $ob);
 $ln = <$fh>;
 is($ln, $text);
 
+@expect = (EOF => $ob, 0);
+is(eof, 1);
+
 @expect = ();
 @in = @data = qw(a line at a time);
 @line = <$fh>;
@@ -273,3 +284,22 @@ is($r, 1);
     sub READLINE { "foobar\n" }
 }
 
+{
+    # make sure the new eof() features work with @ARGV magic
+    local *ARGV;
+    @ARGV = ('haha');
+
+    @expect = (TIEHANDLE => 'Implement');
+    $ob = tie *ARGV, 'Implement';
+    is(ref($ob),  'Implement');
+    is(tied(*ARGV), $ob);
+
+    @data = ("stuff\n");
+    @expect = (EOF => $ob, 1);
+    is(eof(ARGV), '');
+    @expect = (EOF => $ob, 2);
+    is(eof(), '');
+    shift @data;
+    @expect = (EOF => $ob, 0);
+    is(eof, 1);
+}


-- 

Chip Salzenberg twitter:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou

@p5pRT

commented Dec 13, 2008

From @chipdude

The below patch is not in blead, at least it seems not to be, according to
the git mirror. It reports flush failures during -p processing that were
caused by system call failure. Is there a problem with it? Could it be
applied, please?

Inline Patch
diff --git a/doio.c b/doio.c
index c945216..ce57ae1 100644
--- a/doio.c
+++ b/doio.c
@@ -904,7 +904,14 @@ Perl_nextargv(pTHX_ register GV *gv)
     if (io && (IoFLAGS(io) & IOf_ARGV))
 	IoFLAGS(io) |= IOf_START;
     if (PL_inplace) {
-	(void)do_close(PL_argvoutgv,FALSE);
+	IO *outio;
+	if (PL_argvoutgv && isGV_with_GP(PL_argvoutgv) && (outio = GvIO(PL_argvoutgv))) {
+	    dSAVE_ERRNO;
+	    SETERRNO(0, 0);
+	    if (!io_close(outio, FALSE) && errno) /* only report _new_ failure to flush */
+		Perl_die(aTHX_ "-p destination: %s", Strerror(errno));
+	    RESTORE_ERRNO;
+	}
 	if (io && (IoFLAGS(io) & IOf_ARGV)
 	    && PL_argvout_stack && AvFILLp(PL_argvout_stack) >= 0)
 	{


-- 

Chip Salzenberg twitter:chipsalz
"UTOPIA PLANITIA IS IN ORBIT is the new HAN SHOT FIRST" - Crisper Than Thou

@p5pRT

commented Dec 13, 2008

From @rgs

2008/12/13 Chip Salzenberg <chip@pobox.com>:

The below patch is not in blead, at least it seems not to be, according to
the git mirror. It reports flush failures during -p processing that were
caused by system call failure. Is there a problem with it? Could it be
applied, please?

I see that it was applied to bleadperl two days ago by Steve (to
perforce -- the last changes from there are not yet in git).

diff --git a/doio.c b/doio.c
index c945216..ce57ae1 100644
--- a/doio.c
+++ b/doio.c
@@ -904,7 +904,14 @@ Perl_nextargv(pTHX_ register GV *gv)
if (io && (IoFLAGS(io) & IOf_ARGV))
IoFLAGS(io) |= IOf_START;
if (PL_inplace) {
- (void)do_close(PL_argvoutgv,FALSE);
+ IO *outio;
+ if (PL_argvoutgv && isGV_with_GP(PL_argvoutgv) && (outio = GvIO(PL_argvoutgv))) {
+ dSAVE_ERRNO;
+ SETERRNO(0, 0);
+ if (!io_close(outio, FALSE) && errno) /* only report _new_ failure to flush */
+ Perl_die(aTHX_ "-p destination: %s", Strerror(errno));
+ RESTORE_ERRNO;
+ }
if (io && (IoFLAGS(io) & IOf_ARGV)
&& PL_argvout_stack && AvFILLp(PL_argvout_stack) >= 0)
{

@p5pRT

commented Dec 13, 2008

@rgs - Status changed from 'open' to 'resolved'

@p5pRT p5pRT closed this Dec 13, 2008
@p5pRT

commented Dec 14, 2008

From Mordechai.T.Abzug@nasa.gov

When I attempted to verify the fix using bleadperl, the bug still
seemed to be there. I didn't get an error, and the return status was
0. Am I missing something?

Methodology used to install bleadperl:

  mkdir ~/bleadperl ~/localperl &&
  cd ~/bleadperl &&
  rsync -avz rsync://public.activestate.com/perl-current/ . &&
  ./Configure -des -Dusedevel -Dprefix=$HOME/localperl &&
  make test &&
  make install; finished

Methodology used to test:

#!/bin/sh

# script to demo problem of perl not detecting write errors with -pi
# requires root on a linux system

IMG=/root/floppy.img
IMG_DIR=/root
IMG_SIZE=1440
BS=1024
SOURCE=/dev/zero
MOUNTPOINT=/mnt
MKFS_ARGS="-t ext2"
MOUNT_ARGS="-o loop"
FILE=bar
FILE_SIZE=900
#PERL=/usr/bin/perl
PERL=/home/morty/localperl/bin/perl5.11.0

echo Perl version:
$PERL -v

cd / &&
mkdir -p $IMG_DIR &&
dd if=$SOURCE of=$IMG bs=$BS count=$IMG_SIZE &&
mkfs $MKFS_ARGS $IMG </dev/null &&
mount $MOUNT_ARGS $IMG $MOUNTPOINT &&
cd $MOUNTPOINT &&
dd if=$SOURCE of=$FILE bs=$BS count=$FILE_SIZE &&
echo before running perl: && df -k . && ls -l bar* &&
echo Running perl. . . &&
$PERL -pi~ -e 1 $FILE

echo Done. Status is: $? && df -k . && ls -l bar*

# clean up regardless of status
cd /; umount $MOUNTPOINT; rm $IMG

Output of script above​:

Perl version:

This is perl, v5.11.0 DEVEL35082 built for i686-linux
(with 1 registered patch, see perl -V for more detail)

Copyright 1987-2008, Larry Wall

Perl may be copied only under the terms of either the Artistic License
or the
GNU General Public License, which may be found in the Perl 5 source
kit.

Complete documentation for Perl, including FAQ lists, should be found
on
this system using "man perl" or "perldoc perl". If you have access to
the
Internet, point your browser at http://www.perl.org/, the Perl Home
Page.

1440+0 records in
1440+0 records out
1474560 bytes (1.5 MB) copied, 0.00894997 s, 165 MB/s
mke2fs 1.40.8 (13-Mar-2008)
/root/floppy.img is not a block special device.
Proceed anyway? (y,n) Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
184 inodes, 1440 blocks
72 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=1572864
1 block group
8192 blocks per group, 8192 fragments per group
184 inodes per group

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
900+0 records in
900+0 records out
921600 bytes (922 kB) copied, 0.00389722 s, 236 MB/s
before running perl:
Filesystem 1K-blocks Used Available Use% Mounted on
/root/floppy.img 1412 924 416 69% /mnt
-rw-r--r-- 1 root root 921600 2008-12-14 00:27 bar
Running perl. . .
Done. Status is: 0
Filesystem 1K-blocks Used Available Use% Mounted on
/root/floppy.img 1412 1411 0 100% /mnt
-rw-r--r-- 1 root root 495616 2008-12-14 00:27 bar
-rw-r--r-- 1 root root 921600 2008-12-14 00:27 bar~

- Morty (remotely)
