Skip to content

HTTPS clone URL

Subversion checkout URL

You can clone with HTTPS or Subversion.

Download ZIP
branch: master
Fetching contributors…

Cannot retrieve contributors at this time

6159 lines (4771 sloc) 257.718 kb
# -*- mode: org; -*-
#
#+TITLE: *Email discussions for December 1993*
#+OPTIONS: ^:{} author:nil
#+TAGS: From
* proposals
:From: jap
I thought I had sent this out, but none of Harry, Juergen or Dave
recalls seeing it, so I guess I did not.
Rather than responding to a number of topics together, it would help
to focus discussion if you would cut out the relevant topic to which to
add your comments and send several messages.
Sorry for the delay!
--Julian.
------------------------------------------------------------------------
It was agreed that we would not make decisions on each issue at the
meeting, but agree upon a proposal to address each item (from the list
e-mailed beforehand and from others raised at the meeting) and review
the set afterwards (for self-consistency, if nothing else!). This
message contains the set of proposals for modifications to bring 0.99
to 1.0. It may be useful to read this in conjunction with Russell and
Davids' meeting report of the discussion that surrounded a particular
issue.
1. STATIC ERRORS MIGHT BE SIGNALLED
To rename static error as violation. To note that a preparation
program must issue a diagnostic if it detects a violation. To note
that a preparation program must issue a diagnostic if it detects a
dynamic error. If the result of preparation is a runnable program,
then that program must signal any dynamic error.
JAP: further revision is to rename dynamic error as static error now
that the need to distinguish the two flavours has gone.
2. REMOVE: DEFMACRO
MACROS RENAMED SYNTAX FUNCTIONS
ADD: EXPORT SYNTAX
To expand sections 9.5 and 9.7 to note that macro definitions extend
the syntax environment and may be visible externally via the
export-syntax directive. To define the purpose and behaviour of
export-syntax in section 9. To modify 13.2.2.3 consistent with the
foregoing.
3. DEFLITERAL
To receive a description from HED for consideration.
4. LEVEL-0 TELOS CONDITIONS etc.
Moot given proposal 1.
5. ADD <wrong-number-of-arguments>
To modify 13.2
6. CASE IN FORMAT DIRECTIVES
Title is unlreated to proposal. To replace format with a collection
of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
C syntax for format directives extended by %a. To remove scanf. HED
will e-mail a more detailed write-up.
7. ADD: READ
To add definition of read to A.14.
8. CLASS-SPECIFIC INPUT FUNCTIONS
No changes.
9. INVARIANT GF APPLICATION BEHAVIOUR
No changes.
10. METHOD DEFAULT SPECIALIZERS
No changes.
11. REST ARGUMENTS in GENERIC-FUNCTIONS
To add argument "rest" to B15.1-4. To add initarg 'rest to B.11.1.
To add definition of generic-function-rest to B.7. To add definition
of method-rest to B.8.
12. REMOVE: METHOD-LAMBDA-FUNCTION, CALL-METHOD-FUNCTION,
APPLY-METHOD-FUNCTION
No changes.
13. ADDITIONAL SCAN DIRECTIVES
No changes.
14. DOMAIN AND RANGE OF DEEP-COPY AND SHALLOW-COPY
No changes, but see 15.
15. SPECIFICATION OF RESULT TYPES
To add more specific information regarding the class of the result
where approporiate.
16. REMOVE: <abstract-class>
ADD: CLASS OPTION
To remove references to abstract classes (note figure B.1). To add
'abstract class option in B.1.1. To add definition of
abstract-class-p in B.5.
17. ADD: 'required-initargs CLASS OPTION
To add requiredp as a slot option (Table 5) taking a boolean value.
To replace initarg by keyword. To replace initform by default. To
replace initfunction (eg. B.6.4) by default-function.
18. ADD: OPENP
To add definition of openp to A.14.
19. USE 'D' OR 'E' EXPONENT MARKER?
To replace d|D by e|E.
20. MODULE INITIALIZATION ORDER.
To add the wording in the proposal to 9.7.
21. ARGUMENT ORDER TO (SETTER ELEMENT)
No changes.
22. ADD: <collection> AND <sequence> AS ABSTRACT CLASSES
See 29.
23. MAKE thread-start AND thread-value GENERIC
ADD: GENERIC-SIGNAL
ADD: wait METHOD FOR <lock>
To modify 11.1.5-6 to make these function generic and to add default
methods for them. To extend 12.2 with the definition of
signal-using-thread, a generic function, whose first argument is the
thread on which to signal the condition. To modify 12.2.2 to reflect
the use of signal-using-thread. A proposal on the wait method will be
made later by RJB, DDER, NRB and JAP.
24. NON-HYGENIC SEMANTICS FOR MACROS
To expand the description of syntax expansion in 9.7, in particular to
enumerate some of the typical problems stemming from non-hygenic
expansion.
25. STATUS OF and AND or
Partially clarified by 2. Functional versions also to be defined with
the same names as the macros (13.4).
26. CHARACTER NAMING CONVENTIONS
To replace the extended names for special characters (eg. #\newline)
by their string digram equivalents (see table A.7), eg. #\\n.
27. KEYWORDS
To expand the minimal character set (table 2) to be ISO Latin 1. To
add the (concrete) class <keyword>, the abstract class <name> and make
<symbol> a subclass of <name>.
28. WITH-HANDLER
To change the example (fig. 5) to use an externally defined generic
function rather than a dynamically constructed gf and to use
catch/throw instead of let/cc.
29. CLASS HIERARCHY REVISION
To replace figure 3 by the (level 0) hierarchy given in RJB's message
and to add a new figure in Annex B showing the full level 0 and level
1 hierarchy. To note that only abstract classes are subclassable and
that <builtin-class> (abstract) is not subclassable. To rename
<execution-condition> as <general-condition>. To rename
<telos-condition> as <generic-function-condition>. To rename
<syntax-error> as <read-error>. To rename <slot-description> as
<slot> (and all other such references). To remove <structure> and
<structure-class>. To replace defstruct at level-0 by defclass.
Additions to the hierarchy as per RJB/DDERs' diagram.
30. POINTS RAISED BY ULRICH KRIEGEL
arithmetic coercions: to add the generic function lift, to define
methods on it to describe coercion consistent with "floating point"
contagion, except in the case of comparison operators and to describe
its interaction with the n-ary arithmetic operators. (Note: lift is
not to be called by the binary generic operators). To note that
coercion of a <vpi> to a <double-float> may overflow and that case "is
an error". To add <vpi> (<variable-precision-integer>) and all the
necessary methods.
JAP: not clear to me how lift can work given the parenthetical remark
above...unless it takes the operator to be applied as an argument.
Should (+ a b c) ==> (lift a (lift b c binary+) binary+)??
condition class accessors: to define <condition> with two slots,
message and message-args, where message is a format string matching
the message-args. To remove all defined slots in subclasses of
<condition>. To remove defcondition.
31: ADD: N-ARY COMPARATORS
To expand A.3 with >, >=, <= as n-ary functions and != as a binary
function.
32. COLLECTION AND SEQUENCE FUNCTIONS
To add definitions of first, second, last and sort as <sequence>
functions. To add definitions of delete (destructive) and remove
(constructive) as <collection> functions. To add the notion of
explicit keys and to clarify the meaning of operations on infinite
collections. To change the specification of <table> to replace the
initarg fill-value by fill-function, which is a function of two
arguments taking the key and the collection. To add the
class-specific operators: vector-ref, string-ref, list-ref,
hash-table-ref, corresponding setters, vector-length, string-length,
list-length and hash-table-size.
33. PUBLICATION
To add Nitsan, Neil and Odile to the list of contributors. To transfer
Greg from the editors to the contributors.
34. STREAMS
A detailed proposal based on POSIX and buffered I/O will be sent by
HED.
35. FILENAMES
To add the (concrete) class <filename> (where??) with the external
syntax #F"...". To add definitions of the functions: basename, extension,
dirname, device and merge-filenames. To add a converter method from
<string> to <filename>. An additional proposal to add file and
directory operations based on POSIX will be sent by HED.
* proposals
:From: Dave De Roure
>
> 25. STATUS OF and AND or
>
> Partially clarified by 2. Functional versions also to be defined with
> the same names as the macros (13.4).
>
Interesting, quite a compelling example, but in the interest of not having
functions with different semantics and the same names, I propose that if
the functional versions are to be included they should be given different
names; e.g. logical-and (or even logical-and-p!) etc, though maybe that
sounds too much like bitwise operations.
-- Dave
* write-ups from Harley
:From: Juergen Kopp
3. DEFLITERAL
To receive a description from HED for consideration.
6. CASE IN FORMAT DIRECTIVES
Title is unlreated to proposal. To replace format with a collection
of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
C syntax for format directives extended by %a. To remove scanf. HED
will e-mail a more detailed write-up.
34. STREAMS
A detailed proposal based on POSIX and buffered I/O will be sent by
HED.
35. FILENAMES
To add the (concrete) class <filename> (where??) with the external
syntax #F"...". To add definitions of the functions: basename, extension,
dirname, device and merge-filenames. To add a converter method from
<string> to <filename>. An additional proposal to add file and
directory operations based on POSIX will be sent by HED.
Immediately breaking my own request about grouping topics...this is to
say that all the write-ups from Harley mentioned in the above topics,
plus some additional material on number lifting from Nitsan is
available as compressed ps files from ftp.bath.ac.uk:pub/eulisp.
midge $ ls -l *.ps.gz
-rw-r--r-- 1 masjap 19011 Dec 3 13:01 adv-genarith.ps.gz
-rw-r--r-- 1 masjap 88037 Dec 3 12:57 eulisp-proposals.ps.gz
-rw-r--r-- 1 masjap 22837 Dec 3 12:58 genarith.ps.gz
--Julian (at GMD).
* proposals
:From: Jeff Dalton
> > 25. STATUS OF and AND or
> >
> > Partially clarified by 2. Functional versions also to be defined with
> > the same names as the macros (13.4).
>
> Interesting, quite a compelling example, but in the interest of not having
> functions with different semantics and the same names, I propose that if
> the functional versions are to be included they should be given different
> names; e.g. logical-and (or even logical-and-p!) etc, though maybe that
> sounds too much like bitwise operations.
For what it's worth, T has *AND, *OR, and *IF.
-- jeff
* signal
:From: E. Ulrich Kriegel
Hi Julian,
on page 24 it is stated signal should never return.
But what about if one thread signals a condition to another thread.
page19: A signal on a determined thread has no discernable effect on
either the signalled or signalling thread ...
Is it correct, if we understand it in the following sense:
if no thread is given signal should never return?
greetings
--ulrich
* write-ups from Harley
:From: Juergen Kopp
3. DEFLITERAL
To receive a description from HED for consideration.
6. CASE IN FORMAT DIRECTIVES
Title is unlreated to proposal. To replace format with a collection
of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
C syntax for format directives extended by %a. To remove scanf. HED
will e-mail a more detailed write-up.
34. STREAMS
A detailed proposal based on POSIX and buffered I/O will be sent by
HED.
35. FILENAMES
To add the (concrete) class <filename> (where??) with the external
syntax #F"...". To add definitions of the functions: basename, extension,
dirname, device and merge-filenames. To add a converter method from
<string> to <filename>. An additional proposal to add file and
directory operations based on POSIX will be sent by HED.
Immediately breaking my own request about grouping topics...this is to
say that all the write-ups from Harley mentioned in the above topics,
plus some additional material on number lifting from Nitsan is
available as compressed ps files from ftp.bath.ac.uk:pub/eulisp.
midge $ ls -l *.ps.gz
-rw-r--r-- 1 masjap 19011 Dec 3 13:01 adv-genarith.ps.gz
-rw-r--r-- 1 masjap 88037 Dec 3 12:57 eulisp-proposals.ps.gz
-rw-r--r-- 1 masjap 22837 Dec 3 12:58 genarith.ps.gz
--Julian.
* write-ups from Harley
:From: Jeff Dalton
> 6. CASE IN FORMAT DIRECTIVES
>
> Title is unlreated to proposal. To replace format with a collection
> of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
> C syntax for format directives extended by %a. To remove scanf. HED
> will e-mail a more detailed write-up.
This strikes me as a totally bizarre move, as if people would be
converting C programs directly to EuLisp if only an incompatible
format syntax didn't get in the way.
So, in this spirit, let me repeat my suggestion on the Dylan list that
the syntax of (car z) should be z->p.car ("p" for "pair", you see).
Furthermore, we should distinguish between FILE *s and fds, should
replace the AND and OR macros by && and || respectively, and should
remove dangerous orthogonality by distinguishing between statements
and expressions. Strings should be removed from the language, and
programmers should be required to specify an explicit size whenever
they want to, say, concat the referents of two char *s. Don't
forget to call free when you're done.
Several other randomly selected C features should be added as well,
to complete the transformation; then we can take, say, one additional
feature from C++, such as abandoning object identity when there's
multiple-inheritance.
Once we've made these important changes, we should immediately
publish the 1.0 definition, since the main obstacles to the wide
acceptance of EuLisp will have been removed.
-- jd
* write-ups from Harley
:From: Harley Davis
Date: Fri, 3 Dec 93 17:33:19 GMT
:From: Jeff Dalton
> 6. CASE IN FORMAT DIRECTIVES
>
> Title is unlreated to proposal. To replace format with a collection
> of printf functions (fprintf, sprintf, eprintf, printf) and adopt ISO
> C syntax for format directives extended by %a. To remove scanf. HED
> will e-mail a more detailed write-up.
This strikes me as a totally bizarre move, as if people would be
converting C programs directly to EuLisp if only an incompatible
format syntax didn't get in the way.
So, in this spirit, let me repeat my suggestion on the Dylan list that
the syntax of (car z) should be z->p.car ("p" for "pair", you see).
Furthermore, we should distinguish between FILE *s and fds, should
replace the AND and OR macros by && and || respectively, and should
remove dangerous orthogonality by distinguishing between statements
and expressions. Strings should be removed from the language, and
programmers should be required to specify an explicit size whenever
they want to, say, concat the referents of two char *s. Don't
forget to call free when you're done.
Several other randomly selected C features should be added as well,
to complete the transformation; then we can take, say, one additional
feature from C++, such as abandoning object identity when there's
multiple-inheritance.
Once we've made these important changes, we should immediately
publish the 1.0 definition, since the main obstacles to the wide
acceptance of EuLisp will have been removed.
If you look at the FORMAT function in EuLisp, you will see that it's
not much more powerful than printf. However, it does introduce a new
set of directives which will have to be learned by programmers coming
from outside Lisp, and they will not see the point. In addition,
FORMAT has to be implemented independently of an already existing and
standard set of functions, while a printf in EuLisp can be implemented
in part using sprintf, especially for annoying floating point
formatting. This will reduce application size and promote integration
and output consistency between mixed Lisp/C applications. For
formatted output, we can either try to be slightly consistent with
CommonLisp or with C. I really don't see why we would choose
CommonLisp in this case, since the subset we already chose isn't
obviously better than what C provides.
Also, the printf we're proposing is somewhat better than in C since it
does error checking and won't give you a segmentation violation if you
screw up the types or the number of arguments.
Perhaps you could answer the question "Why not use printf instead of
inventing a new sublanguage which is only familiar to small minority
of programmers?"
Look, nobody's going to propose abandoning Lisp in favor of C
semantics. However, the non-Lisp world does have a certain number of
standards for which we can only provide incremental improvements. Why
not use them?
-- Harley
* write-ups from Harley
:From: Jeff Dalton
> If you look at the FORMAT function in EuLisp, you will see that it's
> not much more powerful than printf. However, it does introduce a new
> set of directives which will have to be learned by programmers coming
> from outside Lisp, and they will not see the point.
Give me a break! Every time someone moves from one language to
another they have to learn some different ways of doing the same
thing. Programmers can deal with this. But no! When it comes to
format, suddenly it's all too much. "I don't see the point!", they
cry. "What a ridiculous imposition!" "That's right!, the answer
comes. "Those bastards! I will never use their language, the scum!"
"Yeah! They can't get away with this. We'll show them!"
> In addition,
> FORMAT has to be implemented independently of an already existing and
> standard set of functions,
Haven't you noticed, Harley? Printf doesn't have a clue about
outputting Lisp data. The only way to make it compatible is to
make it too wimpy to use.
> while a printf in EuLisp can be implemented
> in part using sprintf, especially for annoying floating point
> formatting.
Franz Lisp used to do that, and it didn't call the Lisp function
printf.
> This will reduce application size
No it won't. (See Franz.)
> and promote integration
> and output consistency between mixed Lisp/C applications.
So would 1000 other things which we're not going to do. Picking
this one thing seems completely off the wall to me. Especially now.
We had years in which to do this, if it was so important.
> For formatted output, we can either try to be slightly consistent with
> CommonLisp or with C. I really don't see why we would choose
> CommonLisp in this case, since the subset we already chose isn't
> obviously better than what C provides.
An attempt to exploit anti-Common lisp feeling will get nowhere with
me, as you must know. Besides, format predated Common Lisp.
In any case, you're not even talking about redoing I/O to be compatible
with C generally, you're talking about a minute part of the language
that won't be fully compatible in any case.
It looks like C++ has got Lisp folk running so scared that they're
starting to consider irrational ways of attracting C progrmmers.
If you want to make a case that certain small changes will make
a big difference, you should show that this really is the case and
indentify the changes that will do the trick.
> Perhaps you could answer the question "Why not use printf instead of
> inventing a new sublanguage which is only familiar to small minority
> of programmers?"
This minute sublanguage, in a familiar form, will present little
problem to programmers. if you have some evidence that calling
it printf will make a huge difference, let's have it.
> Look, nobody's going to propose abandoning Lisp in favor of C
> semantics. However, the non-Lisp world does have a certain number of
> standards for which we can only provide incremental improvements. Why
> not use them?
Printf is not a standard for any language but C. If you can make a
case that this particular change will make a big difference, I'll
be happy to make it. But don't tell me there's a general non-Lisp-World
standard that we ought to respect, and that the burden of proof is
therefore on me, because it's just not so.
If you want to compete with C and C++, design a better language than
C or C++. If you have to resort to removing micro-irritants, you've
alreay lost.
-- jd
* proposals (AND and OR)
:From: jpff
Message written at Fri Dec 3 21:13:50 GMT 1993
I do agree with Dave that having functions and macros with the same name
does sound like being deliberately perverse. The names "logical-and"
are the ones I am used to as bitwise -- I realise this is not very
logical.....
==John
* write-ups from Harley
:From: Harley Davis
Date: Sun, 5 Dec 93 23:58:13 GMT
:From: Jeff Dalton
> If you look at the FORMAT function in EuLisp, you will see that it's
> not much more powerful than printf. However, it does introduce a new
> set of directives which will have to be learned by programmers coming
> from outside Lisp, and they will not see the point.
Give me a break! Every time someone moves from one language to
another they have to learn some different ways of doing the same
thing. Programmers can deal with this. But no! When it comes to
format, suddenly it's all too much. "I don't see the point!", they
cry. "What a ridiculous imposition!" "That's right!, the answer
comes. "Those bastards! I will never use their language, the scum!"
"Yeah! They can't get away with this. We'll show them!"
It's getting somewhat difficult to discuss this in a rational way when
you blow up at any response. This discussion would be easier if you
would just present your arguments without exagerating the other side
and inventing straw men.
In any case, I think you have misunderstood the point of this
proposal. It is not meant to attract C/C++ programmers away from
C/C++. It is meant to recognize the fact that Lisp's role in the
future will necessarily be as a complement to C/C++ -- indeed, that is
already the case today -- and so we should, wherever possible,
simplify the life of the programmer who will use both languages
together.
If you look at the entire set of proposals which I sent to Julian and
which are available by ftp from Bath, you will see not just printf but
also a new proposal for filenames, file operations, and streams.
These proposals are based on POSIX file operations and the stream
system is meant to be compatible with POSIX buffered stream
operations, with the addition of a higher level of functionality for
reading and printing Lisp objects. (If you had read the printf
proposal, you would have seen that the proposed printf also has a new
directive for handling Lisp objects and treats %s reasonably for Lisp
objects.) So printf is not an isolated case.
> In addition,
> FORMAT has to be implemented independently of an already existing and
> standard set of functions,
Haven't you noticed, Harley? Printf doesn't have a clue about
outputting Lisp data. The only way to make it compatible is to
make it too wimpy to use.
Ours does.
> while a printf in EuLisp can be implemented
> in part using sprintf, especially for annoying floating point
> formatting.
Franz Lisp used to do that, and it didn't call the Lisp function
printf.
But *why not* call it printf? What's your argument, Jeff?
> This will reduce application size
No it won't. (See Franz.)
Well, it does in our system. If Franz implemented things in a losing
way, it's not the fault of this specification.
> and promote integration
> and output consistency between mixed Lisp/C applications.
So would 1000 other things which we're not going to do. Picking
this one thing seems completely off the wall to me. Especially now.
We had years in which to do this, if it was so important.
It's not just one thing. If there are other things aside from those
proposed which would help mixed language applications, I would be
interested in hearing about them. I hope other EuLispers would too.
(I don't think adopting C syntax or getting rid of garbage collection
helps anybody.)
> For formatted output, we can either try to be slightly consistent with
> CommonLisp or with C. I really don't see why we would choose
> CommonLisp in this case, since the subset we already chose isn't
> obviously better than what C provides.
An attempt to exploit anti-Common lisp feeling will get nowhere with
me, as you must know. Besides, format predated Common Lisp.
I'm not trying to exploit anti-CommonLisp feeling. I'm simply making
the completely empirical point that more programmers know printf than
format; printf and EuLisp's format are basically equivalent in power;
we should go with what more people know. That's the argument.
In any case, you're not even talking about redoing I/O to be compatible
with C generally, you're talking about a minute part of the language
that won't be fully compatible in any case.
Please read the entire proposal before making such assumptions.
It looks like C++ has got Lisp folk running so scared that they're
starting to consider irrational ways of attracting C progrmmers.
If you want to make a case that certain small changes will make
a big difference, you should show that this really is the case and
indentify the changes that will do the trick.
You are wrong.
> Perhaps you could answer the question "Why not use printf instead of
> inventing a new sublanguage which is only familiar to small minority
> of programmers?"
This minute sublanguage, in a familiar form, will present little
problem to programmers. if you have some evidence that calling
it printf will make a huge difference, let's have it.
A huge difference in what? Again you seem to be assuming that all
this is just some sort of trick to attract C programmers. But that's
just not true.
> Look, nobody's going to propose abandoning Lisp in favor of C
> semantics. However, the non-Lisp world does have a certain number of
> standards for which we can only provide incremental improvements. Why
> not use them?
Printf is not a standard for any language but C. If you can make a
case that this particular change will make a big difference, I'll
be happy to make it. But don't tell me there's a general non-Lisp-World
standard that we ought to respect, and that the burden of proof is
therefore on me, because it's just not so.
Most programmers who will be likely EuLisp users know C already. And
there is a general non-Lisp world standard whose name is POSIX. This
standard does specify a certain number of operations including printf.
If you want to compete with C and C++, design a better language than
C or C++. If you have to resort to removing micro-irritants, you've
alreay lost.
Not compete, co-operate. Competing is a sure way to lose;
co-operating intelligently will provide Lisp its appropriate
ecological niche.
-- Harley
* write-ups from Harley
:From: Jeff Dalton
> It's getting somewhat difficult to discuss this in a rational way when
> you blow up at any response.
You think that's blowing up? You've misunderstood me. Do I
really have to put in funny little character sequences everywhere?
> This discussion would be easier if you
> would just present your arguments without exagerating the other side
> and inventing straw men.
If I've misunderstood ("invented") your arguments, perhaps it's
because they weren't sufficiently clear.
> In any case, I think you have misunderstood the point of this
> proposal. It is not meant to attract C/C++ programmers away from
> C/C++. It is meant to recognize the fact that Lisp's role in the
> future will necessarily be as a complement to C/C++ -- indeed, that is
> already the case today -- and so we should, wherever possible,
> simplify the life of the programmer who will use both languages
> together.
And I still think this is a completely trivial move in that direction.
In any case, you *are* trying to attract C and C++ programmers,
whether "away from C and C++" or not. Moreover, if they use Lisp
at all, they will be moving away from C and C++ to that extent.
Right now, their Lisp usage tends to be zero.
> If you look at the entire set of proposals which I sent to Julian and
> which are available by ftp from Bath, you will see not just printf but
> also a new proposal for filenames, file operations, and streams.
> These proposals are based on POSIX file operations and the stream
> system is meant to be compatible with POSIX buffered stream
> operations, with the addition of a higher level of functionality for
> reading and printing Lisp objects. (If you had read the printf
> proposal, you would have seen that the proposed printf also has a new
> directive for handling Lisp objects and treats %s reasonably for Lisp
> objects.) So printf is not an isolated case.
So you agree that the printf change is not justified on its own,
contrary to how it appeared in your previous message.
I don't mind being compatible with "buffered streams", though one of
the advantages of Lisp used to be that it was not so tied to details
at that level as C. But compatible ought to cover a very wide
range, so why it requires printf is not clear. In any case,
replacing the I/O system at this point requires more in the way
of justification than I have seen. For instance, to what extent
will C++ I/O be able to use EuLisp streams directly?
BTW, I knew it would have a "new" (to C, not to EuLisp) directive for
printing Lisp objects.
> Haven't you noticed, Harley? Printf doesn't have a clue about
> outputting Lisp data. The only way to make it compatible is to
> make it too wimpy to use.
>
> Ours does.
Then it's not the same as the one in C. Only the wimpy one is
the same. If it's "compatible" (which covers a multitude of sins)
you should say how. You should at least check whether the C standard
allows extensions of the required sort.
> > while a printf in EuLisp can be implemented
> > in part using sprintf, especially for annoying floating point
> > formatting.
>
> Franz Lisp used to do that, and it didn't call the Lisp function
> printf.
>
> But *why not* call it printf? What's your argument, Jeff?
If there's no good reason to call it printf, then why do it?
If you propose a change, you ought to accept the burden of proof.
I think calling it printf looks silly, won't impress C programmers,
is a gratuitous change from existing Lisp practice, is inconsistent
with naming and syntax conventions in the rest of EuLisp, and is
being proposed so late in the day that we won't have time to deal
with any unfortunate consequences before we puiblish the definition.
> > This will reduce application size
>
> No it won't. (See Franz.)
>
> Well, it does in our system. If Franz implemented things in a losing
> way, it's not the fault of this specification.
I don't think you're making much effort to understand me. Franz
shows (IMHO) that you can have the same benefits (being discussed at
this point) without making the change you suggest. Therefore the
additional benefits of the change are zero.
> > and promote integration
> > and output consistency between mixed Lisp/C applications.
>
> So would 1000 other things which we're not going to do. Picking
> this one thing seems completely off the wall to me. Especially now.
> We had years in which to do this, if it was so important.
>
> It's not just one thing. If there are other things aside from those
> proposed which would help mixed language applications, I would be
> interested in hearing about them. I hope other EuLispers would too.
I'd be interested in hearing about them, and if they form a coherent
package in which printf makes sense, that will be a strong point in
favor of printf.
> (I don't think adopting C syntax or getting rid of garbage collection
> helps anybody.)
How about FILE *s? :->
> > For formatted output, we can either try to be slightly consistent with
> > CommonLisp or with C. I really don't see why we would choose
> > CommonLisp in this case, since the subset we already chose isn't
> > obviously better than what C provides.
>
> An attempt to exploit anti-Common lisp feeling will get nowhere with
> me, as you must know. Besides, format predated Common Lisp.
>
> I'm not trying to exploit anti-CommonLisp feeling.
Then why present it as a choice between compatibility with CL and
compatibility with C?
> I'm simply making
> the completely empirical point that more programmers know printf than
> format; printf and EuLisp's format are basically equivalent in power;
> we should go with what more people know. That's the argument.
It's a very general argument being applied very selectively.
So the case-specific arguments must be the decisive ones.
What are they?
> In any case, you're not even talking about redoing I/O to be compatible
> with C generally, you're talking about a minute part of the language
> that won't be fully compatible in any case.
>
> Please read the entire proposal before making such assumptions.
Tell me why it's compatible with C in a suffucuently useful sense.
Common Lisp I/O is compatible with C to the extent that it can be
implemented in C and hence can be regarded as an extension.
> It looks like C++ has got Lisp folk running so scared that they're
> starting to consider irrational ways of attracting C progrmmers.
> If you want to make a case that certain small changes will make
> a big difference, you should show that this really is the case and
> indentify the changes that will do the trick.
>
> You are wrong.
About what? That you ought to show that the changes will make a
big difference?
As for the rest, see above. You are trying to attract C and C++
programmers.
> > Perhaps you could answer the question "Why not use printf instead of
> > inventing a new sublanguage which is only familiar to small minority
> > of programmers?"
>
> This minute sublanguage, in a familiar form, will present little
> problem to programmers. if you have some evidence that calling
> it printf will make a huge difference, let's have it.
>
> A huge difference in what? Again you seem to be assuming that all
> this is just some sort of trick to attract C programmers. But that's
> just not true.
Will it make a huge difference in *anything*?
If it's not to attract C++ programmers, why not use the Algol 68 name?
> > Look, nobody's going to propose abandoning Lisp in favor of C
> > semantics. However, the non-Lisp world does have a certain number of
> > standards for which we can only provide incremental improvements. Why
> > not use them?
>
> Printf is not a standard for any language but C. If you can make a
> case that this particular change will make a big difference, I'll
> be happy to make it. But don't tell me there's a general non-Lisp-World
> standard that we ought to respect, and that the burden of proof is
> therefore on me, because it's just not so.
>
> Most programmers who will be likely EuLisp users know C already.
How do you know? Maybe C programmers will want nothing to do with
it.
> And there is a general non-Lisp world standard whose name is POSIX. This
> standard does specify a certain number of operations including printf.
I have used a wide range of programming languages, and none of them
except C use printf. That it's in a lower-level standard is beside
the point. Languages should be independent of operating systems
and the like. Now, if you want to propose that we have all POSIX
calls in a library, maybe that makes sense.
> If you want to compete with C and C++, design a better language than
> C or C++. If you have to resort to removing micro-irritants, you've
> alreay lost.
>
> Not compete, co-operate. Competing is a sure way to lose;
> co-operating intelligently will provide Lisp its appropriate
> ecological niche.
I'm sorry, but if you want C and C++ programmers to use EuLisp
for anything at all you're going to have to compete with the
languages they would use otherwise, namely C and C++. Removing
micro-irritants is not an effective way to do this.
-- jeff
* write-ups from Harley
:From: Richard Tobin
> there is a general non-Lisp world standard whose name is POSIX. This
> standard does specify a certain number of operations including printf.
This seems irrelevant to me. POSIX isn't a standard for Lisp. And
the standard it provides for printf is a standard for printing C data
types. And this seems to provide an argument *against* calling the
Lisp function printf: if an implementation wants to provide access to
the real printf, for use with actual C data, it will have to call it
something else, or else have functions in two modules with the same
name.
Indeed, I would suggest a rule of adopting names that are *different*
from those of any POSIX functions, at least where it's not too
inconvenient.
-- Richard
* write-ups from Harley
:From: Richard Tobin
> You should at least check whether the C standard
> allows extensions of the required sort.
C reserves new %-lower-case-letter specifiers for future use. Other
characters "may be used in extensions". So if we adopt printf, we
should not use %a (maybe %A?).
-- Richard
* proposals (AND and OR)
:From: Jeff Dalton
> Message written at Fri Dec 3 21:13:50 GMT 1993
>
> I do agree with Dave that having functions and macros with the same name
> does sound like being deliberately perverse. The names "logical-and"
> are the ones I am used to as bitwise -- I realise this is not very
> logical.....
>
> ==John
The more I think about this, the more I think the idea of AND and OR
functions is a mistake.
The times when they're the right thing are fairly rare. I don't
think I've ever encountered one. Moreover, there are a number of
other, more general, operations that can easily handle the cases
handled directly by AND and OR functions. (I'm thinking of MEMBER,
SOME, EVERY, various loop constructs, etc.)
However, if names are wanted, how about ALL-TRUE and ALL-FALSE,
with (COMPLEMENT ALL-FALSE) serving as OR? (If we don't have
COMPLEMENT, I think we should. Since we have functional values,
let's take advantage of them.)
-- jeff
* write-ups from Harley
:From: Jeff Dalton
> the standard it provides for printf is a standard for printing C data
> types.
BTW, do we have null-terminated strings in EuLisp? I think it
would be a good idea. Also a way to test for the null char.
-- jeff
* printf
:From: Harley Davis
In article Jeff Dalton writes:
> This discussion would be easier if you
> would just present your arguments without exagerating the other side
> and inventing straw men.
If I've misunderstood ("invented") your arguments, perhaps it's
because they weren't sufficiently clear.
I made the arguments at the EuLisp meeting, where they were generally
accepted. You responded (rather vehemently) to a posting by Julian in
which he merely listed the proposals from the meeting without any
arguments at all. So you can't really complain that the arguments
were insufficiently clear; it's not my fault you weren't at the
meeting, and you never asked for the arguments - you just responded to
what you thought were the arguments.
Now, if you want me to restate the argument as clearly as possible, I
will do so once again. Here it is, as I believe I stated it during
the meeting:
POSIX provides a certain number of services which, as services, are
more or less sufficient for a large number of tasks, and they are
fairly well-known among the programmers who are likely to use EuLisp.
Therefore, when we want to provide a service in EuLisp which has an
analogue in POSIX, we should provide a binding to the equivalent POSIX
functions. In addition, because Lisp programmers expect better error
handling and a simpler interface, we should add in error handling and
take advantage of existing EuLisp types when providing such a binding.
As a concrete example of this reasoning, I propose replacing the
existing EuLisp format (which in any case is basically printf with
renamed directives) with the POSIX printf, plus various improvements.
Additionally, stream and file operations can be based on their POSIX
equivalents. In the case of files, this binding is fairly
straightforward; in the case of streams, it is more complicated
because FILE*'s and fd's are insufficient for a number of reasons (not
least is that they don't support READ/PRINT very well). However, it
is at least possible to have stream operations which are explicitly
buffered in a way compatible with FILE*'s, and in fact we can provide
a reasonable level of genericity in streams by defining a generic
protocol over this buffering. A demonstration specification and
implementation is provided by Ilog Talk, which has taken this
approach.
There is the argument in its complete form. Now you can tell me I am
insufficiently clear, if you think that is the case.
> In any case, I think you have misunderstood the point of this
> proposal. It is not meant to attract C/C++ programmers away from
> C/C++. It is meant to recognize the fact that Lisp's role in the
> future will necessarily be as a complement to C/C++ -- indeed, that is
> already the case today -- and so we should, wherever possible,
> simplify the life of the programmer who will use both languages
> together.
And I still think this is a completely trivial move in that direction.
In any case, you *are* trying to attract C and C++ programmers,
whether "away from C and C++" or not. Moreover, if they use Lisp
at all, they will be moving away from C and C++ to that extent.
Right now, their Lisp usage tends to be zero.
No, I still must disagree. This move is not at all designed to
attract C and C++ programmers (especially the latter). It is designed
to make the language more homogenous with the de facto standards in
the environments in which it will likely be used, and therefore make
life easier for those programmers which have chosen to use it. This
argument makes no reference at all to the reasons why a C/C++
programmer might choose to use EuLisp. In addition to this argument,
I think the general POSIX move does in fact make EuLisp more
attractive to those programmers, and thus as a side-effect can help
attract these programmers, but it also helps those who are primarily
Lisp programmers who in any case also have to use C/C++ for any
serious work.
If you want to know what I think will attract C/C++ programmers, I
would rather cite EuLisp's real advantages: GC, macros, better object
system, interactive environment, etc., which lead to greater
productivity, plus of course its advantages compared other high-level
dynamic languages such as CL, Dylan, Python, Tcl or whatever. If you
as a EuLisp marketer thought that supplying a POSIX binding would
convince some individual, you could also bring that out, but it's not
the primary intention.
Like you, Jeff, I would hope that everything we add to the language is
purely to make it a better language, and not some marketing trick
(like Dylan's alternative syntax). I really do believe that printf is
better than some mutant format which is in any case based on printf.
I also believe that basing functionality on a standard when we can't
do much better is also good for the language since it makes it simpler
to implement and specify and easier to learn.
This would all be different if we had some great, really winning ideas
for streams, filenames, and formatting. But this isn't the case.
(I would also point out in passing that when we introduced scanf you
didn't raise hell. Why not? What if we had proposed printf at that
point? I can only suspect that something non-technical is bothering
you about this idea.)
> If you look at the entire set of proposals which I sent to Julian and
> which are available by ftp from Bath, you will see not just printf but
> also a new proposal for filenames, file operations, and streams.
> These proposals are based on POSIX file operations and the stream
> system is meant to be compatible with POSIX buffered stream
> operations, with the addition of a higher level of functionality for
> reading and printing Lisp objects. (If you had read the printf
> proposal, you would have seen that the proposed printf also has a new
> directive for handling Lisp objects and treats %s reasonably for Lisp
> objects.) So printf is not an isolated case.
So you agree that the printf change is not justified on its own,
contrary to how it appeared in your previous message.
Since you were responding to the proposal, I had assumed that you had
read it in its entiriety and that you had somehow learned of the
general argument behind it and the further ramifications. Apparently
this wasn't the case, so we have to back up.
I don't mind being compatible with "buffered streams", though one of
the advantages of Lisp used to be that it was not so tied to details
at that level as C. But compatible ought to cover a very wide
range, so why it requires printf is not clear. In any case,
replacing the I/O system at this point requires more in the way
of justification than I have seen. For instance, to what extent
will C++ I/O be able to use EuLisp streams directly?
C++ streams are not compatible with EuLisp streams. But so what? I
think C++ streams are losing, and I wouldn't want to propose something
like them for EuLisp.
> Haven't you noticed, Harley? Printf doesn't have a clue about
> outputting Lisp data. The only way to make it compatible is to
> make it too wimpy to use.
>
> Ours does.
Then it's not the same as the one in C. Only the wimpy one is
the same. If it's "compatible" (which covers a multitude of sins)
you should say how. You should at least check whether the C standard
allows extensions of the required sort.
Read the proposal and see how. I don't see whether it matters if the
POSIX standard allows extensions of the required sort or not. We
aren't bound by the standard, but I believe it behooves us to follow
it to the extent that it is reasonable.
> > while a printf in EuLisp can be implemented
> > in part using sprintf, especially for annoying floating point
> > formatting.
>
> Franz Lisp used to do that, and it didn't call the Lisp function
> printf.
>
> But *why not* call it printf? What's your argument, Jeff?
If there's no good reason to call it printf, then why do it?
If you propose a change, you ought to accept the burden of proof.
I think calling it printf looks silly,
I disagree, I think the name by itself makes almost no difference at all.
won't impress C programmers,
disagree; all the C/C++ programmers here think it's good.
is a gratuitous change from existing Lisp practice,
it's not gratuitous
is inconsistent
with naming and syntax conventions in the rest of EuLisp,
(except scanf and the other proposed POSIX bindings)
and is being proposed so late in the day that we won't have time to
deal with any unfortunate consequences before we puiblish the
definition.
How many unfortunate consequences could it have? Check out the
current EuLisp format and tell me how replacing that with the proposed
printf could possibly have unfortunate consequences. They're
basically the same. (OK, there's no equivalent to ~& or ~r in printf.
I can live with it, or they can be added. On the other hand, there's
no way to specify field widths or right justification for certain
directives in the current EuLisp format.) If you trust our experience
with Talk, I can assure you that there are no hidden problems.
> > and promote integration
> > and output consistency between mixed Lisp/C applications.
>
> So would 1000 other things which we're not going to do. Picking
> this one thing seems completely off the wall to me. Especially now.
> We had years in which to do this, if it was so important.
>
> It's not just one thing. If there are other things aside from those
> proposed which would help mixed language applications, I would be
> interested in hearing about them. I hope other EuLispers would too.
I'd be interested in hearing about them, and if they form a coherent
package in which printf makes sense, that will be a strong point in
favor of printf.
You're the one who said there are 1000 other things we could do. So
let's hear about some of them. I already proposed a certain number
which Julian has kindly made ftable.
> (I don't think adopting C syntax or getting rid of garbage collection
> helps anybody.)
How about FILE *s? :->
Streams are better. FILE*'s are losing because they don't do
everything that fd's can do. Fd's are losing because they aren't
objects and they aren't buffered. It's also nice to able to subclass
streams and do Lisp object I/O on them. C++ streams are losing
because they use too much state for controlling anything beyond the
most trivial uses.
> > For formatted output, we can either try to be slightly consistent with
> > CommonLisp or with C. I really don't see why we would choose
> > CommonLisp in this case, since the subset we already chose isn't
> > obviously better than what C provides.
>
> An attempt to exploit anti-Common lisp feeling will get nowhere with
> me, as you must know. Besides, format predated Common Lisp.
>
> I'm not trying to exploit anti-CommonLisp feeling.
Then why present it as a choice between compatibility with CL and
compatibility with C?
Of course, we could be consistent with some language even more
marginal than CL, or (as we did) invent our own incompatible format.
I just assumed that these were bad choices and not worth considering,
but perhaps I was wrong. If someone were to propose a format that was
really, really better than printf, than I would certainly listen.
> I'm simply making
> the completely empirical point that more programmers know printf than
> format; printf and EuLisp's format are basically equivalent in power;
> we should go with what more people know. That's the argument.
It's a very general argument being applied very selectively.
So the case-specific arguments must be the decisive ones.
What are they?
See above.
> In any case, you're not even talking about redoing I/O to be compatible
> with C generally, you're talking about a minute part of the language
> that won't be fully compatible in any case.
>
> Please read the entire proposal before making such assumptions.
Tell me why it's compatible with C in a suffucuently useful sense.
Common Lisp I/O is compatible with C to the extent that it can be
implemented in C and hence can be regarded as an extension.
Here's how:
some_fn(int x)
{
printf("fib(%d) = %d\n", x, fib(x));
}
vs. one of:
(defun some-fn (x)
(printf "fib(%d) = %d\n" x (fib x)))
or
(defun some-fn (x)
(format t "fib(~d) = ~d~%" x (fib x)))
The second Lisp form has no objective advantages over the first. The
first is likely to be understood and appreciated by 95% of
Unix/Windows programmers whatever their background. The second is
understood immediately only by experienced Lispers, who in any case
also understand the first form because they've all programmed in C by
now.
I have used a wide range of programming languages, and none of them
except C use printf. That it's in a lower-level standard is beside
the point. Languages should be independent of operating systems
and the like. Now, if you want to propose that we have all POSIX
calls in a library, maybe that makes sense.
The language *is* independent of OS's and the like. However, we're
talking about the I/O library, which is necessarily tied to some
notion of its operating environment. It just so happens that POSIX
provides such a notion which is both familiar to most programmers and
quite portable across the vast majority of platforms on which EuLisp
will be used.
> If you want to compete with C and C++, design a better language than
> C or C++. If you have to resort to removing micro-irritants, you've
> alreay lost.
>
> Not compete, co-operate. Competing is a sure way to lose;
> co-operating intelligently will provide Lisp its appropriate
> ecological niche.
I'm sorry, but if you want C and C++ programmers to use EuLisp
for anything at all you're going to have to compete with the
languages they would use otherwise, namely C and C++. Removing
micro-irritants is not an effective way to do this.
I completely disagree. It is possible to present Lisp as a complement
to C/C++; in fact, this seems to be by far the most successful way to
present it. It is unnecessary to compete; both languages have a
place. Removing any micro-irritant for which we don't provide an
obvious better alternative also helps, just a little in each case, but
still some. So I repeat: Why not? I have presented a detailed
argument along with a concrete, implemented proposal. It has pleased
both our programmers and the clients we've shown it to. What argument
is there for holding on to the status quo?
-- Harley
* printf
:From: Jeff Dalton
> If I've misunderstood ("invented") your arguments, perhaps it's
> because they weren't sufficiently clear.
>
> I made the arguments at the EuLisp meeting, where they were generally
> accepted. You responded (rather vehemently) to a posting by Julian in
> which he merely listed the proposals from the meeting without any
> arguments at all. So you can't really complain that the arguments
> were insufficiently clear; it's not my fault you weren't at the
> meeting, and you never asked for the arguments - you just responded to
> what you thought were the arguments.
My original message didn't respond to arguments; it responded to a
proposal. My second message responded to things you said in reply,
which seems fair enough to me.
Meetings sometimes make incorrect decisions and sometimes follow a
mistaken line of reasoning. The first few times I heard or read that
format might be changed to printf, I thought it was kind of silly but
didn't think much of it. After all, we've long had \n in there.
When it appeared again in a message from Juergen Kopp, it suddenly
struck me that it was a bizarre change to make, hence my message.
Taking a little bit of C and putting it into EuLisp still seems
bizarre to me, even if it's also a little bit of POSIX.
Now, here is the proposal I've seen:
To replace format with a collection of printf functions (fprintf,
sprintf, eprintf, printf) and adopt ISO C syntax for format directives
extended by %a. To remove scanf. HED will e-mail a more detailed
write-up.
In addition, there's a long write-up from you, which seems to contain
sections of the Ilog Talk documentation. I don't know how much of this
is actually being proposed, but some of it looks like pretty standard
Lisp stuff, while some other parts seems to have efficiency advantages.
The printf change is in a third cartegory that I find more questionable.
I can see the point of being able to call the same routines in several
different languages, but the case for similar routines with the same
name is (shall we say) less clear.
> Now, if you want me to restate the argument as clearly as possible, I
> will do so once again. Here it is, as I believe I stated it during
> the meeting:
>
> POSIX provides a certain number of services which, as services, are
> more or less sufficient for a large number of tasks, and they are
> fairly well-known among the programmers who are likely to use EuLisp.
There's an implicit decision here about who these programmers are.
It looks to me like it's pretty close in extension to "C programmers".
C is the only language I know that uses printf, for example. Moreover,
it doesn't seem to include programmers like me who are far more familiar
with Lisp conventions. Changing the conventions for the directive string
from ~ to % etc will actually discourage me from using Eulisp.
> Therefore, when we want to provide a service in EuLisp which has an
> analogue in POSIX, we should provide a binding to the equivalent POSIX
> functions.
It looks like you're not providing a binding but rather a different
(albeit similar) function.
> In addition, because Lisp programmers expect better error
> handling and a simpler interface, we should add in error handling and
> take advantage of existing EuLisp types when providing such a binding.
> As a concrete example of this reasoning, I propose replacing the
> existing EuLisp format (which in any case is basically printf with
> renamed directives) with the POSIX printf, plus various improvements.
> Additionally, stream and file operations can be based on their POSIX
> equivalents. In the case of files, this binding is fairly
> straightforward; in the case of streams, it is more complicated
> because FILE*'s and fd's are insufficient for a number of reasons (not
> least is that they don't support READ/PRINT very well). However, it
> is at least possible to have stream operations which are explicitly
> buffered in a way compatible with FILE*'s, and in fact we can provide
> a reasonable level of genericity in streams by defining a generic
> protocol over this buffering. A demonstration specification and
> implementation is provided by Ilog Talk, which has taken this
> approach.
If I were using C and Eulisp together, I'd want to know several
things. Can I pass EuLisp strings and streams directly to C
functions? Do Eulisp std{in,out} and C std{in,out} stay in step?
Do they share buffers?
Now it looks to me like you're diverging from the POSIX routines
in some significant ways, thought it's not clear exactly what they
are. But once you diverge, having the same names is a somewhat
mixed blessing.
> There is the argument in its complete form. Now you can tell me I am
> insufficiently clear, if you think that is the case.
It's reasonably clear, but it doesn't look very different from what I
thought the argument was.
> In any case, you *are* trying to attract C and C++ programmers,
> whether "away from C and C++" or not. Moreover, if they use Lisp
> at all, they will be moving away from C and C++ to that extent.
> Right now, their Lisp usage tends to be zero.
>
> No, I still must disagree. This move is not at all designed to
> attract C and C++ programmers (especially the latter). It is designed
> to make the language more homogenous with the de facto standards in
> the environments in which it will likely be used,
But you won't actually conform to the standards.
> and therefore make life easier for those programmers which have
> chosen to use it.
It doesn't make life easier for me. There's a class of programmers,
which doesn't include programmers like me, that's going to get this
easier life. If this isn't designed to attract them, then I'm baffled
as to what aim it has; and I don't see the point in debating whether
it's exactly C and C++ programmers or some similar group. Nonetheless,
printf will be more familiar to C programmers than to anyone else.
> This argument makes no reference at all to the reasons why a C/C++
> programmer might choose to use EuLisp.
Not explicitly; but see my previous paragraph.
> In addition to this argument,
> I think the general POSIX move does in fact make EuLisp more
> attractive to those programmers, and thus as a side-effect can help
> attract these programmers, but it also helps those who are primarily
> Lisp programmers who in any case also have to use C/C++ for any
> serious work.
Well, I often have to use C together with Lisp, and changing the
format conventions makes me less inclined to use EuLisp. But perhaps
I don't use C enough.
> If you want to know what I think will attract C/C++ programmers, I
> would rather cite EuLisp's real advantages: GC, macros, better object
> system, [...]
Ok.
> [...] If you as a EuLisp marketer thought that supplying a
> POSIX binding would convince some individual, you could also bring
> that out, but it's not the primary intention.
It seems to be that the only available plausible aims are to attract
some programmers who would not otherwise use Eulisp or to make life
easier for some people who would use Eulisp anyway. Are you saying
it's the latter?
> Like you, Jeff, I would hope that everything we add to the language is
> purely to make it a better language, and not some marketing trick
I never thought it was a marketing trick. I thought it was being
proposed in good faith. I'm sorry if my presentation made that unclear.
> I really do believe that printf is
> better than some mutant format which is in any case based on printf.
Was it based on printf?
> I also believe that basing functionality on a standard when we can't
> do much better is also good for the language since it makes it simpler
> to implement and specify and easier to learn.
But how much of existing printf can we use? We'll want to print
many things that aren't handled by printf, including numbers of
sorts not known in C.
> This would all be different if we had some great, really winning ideas
> for streams, filenames, and formatting. But this isn't the case.
> (I would also point out in passing that when we introduced scanf you
> didn't raise hell. Why not? What if we had proposed printf at that
> point? I can only suspect that something non-technical is bothering
> you about this idea.)
I always thought that having scanf was kind of silly. I don't know
when it was introduced. Maybe I wasn't there. I didn't even think
much of the printf change at first.
Anyway, I have both technical and nontechnical reservarions, as I've
tried to indicate.
> Since you were responding to the proposal, I had assumed that you had
> read it in its entiriety and that you had somehow learned of the
> general argument behind it and the further ramifications. Apparently
> this wasn't the case, so we have to back up.
Had people read it in its entirety before discussing it at the
meeting? That wasn't my impression, though I admit I have it
second hand. So I assumed that the notes about the meeting
correctly reported what was being proposed and that this didn't
necessarily coincide with your long paper.
> > Haven't you noticed, Harley? Printf doesn't have a clue about
> > outputting Lisp data. The only way to make it compatible is to
> > make it too wimpy to use.
> >
> > Ours does.
>
> Then it's not the same as the one in C. Only the wimpy one is
> the same. If it's "compatible" (which covers a multitude of sins)
> you should say how. You should at least check whether the C standard
> allows extensions of the required sort.
>
> Read the proposal and see how.
Well, according to Richard's message one can extend only via
upper-case letters. He tells me he also raised this in the meeting.
> I don't see whether it matters if the
> POSIX standard allows extensions of the required sort or not. We
> aren't bound by the standard, but I believe it behooves us to follow
> it to the extent that it is reasonable.
If you're talking compatibility, such details matter. The more you
say "compatibility, but differing in lots of details", the less
convincing I find it.
> I think calling it printf looks silly,
>
> I disagree, I think the name by itself makes almost no difference at all.
It makes it look like you're trying to be like C.
> won't impress C programmers,
>
> disagree; all the C/C++ programmers here think it's good.
So C and C++ programmers matter after all, do they?
> is a gratuitous change from existing Lisp practice,
>
> it's not gratuitous
If the name doesn't matter, then changing the name is gratuitous.
> is inconsistent
> with naming and syntax conventions in the rest of EuLisp,
>
> (except scanf and the other proposed POSIX bindings)
Scanf is being eliminated.
> and is being proposed so late in the day that we won't have time to
> deal with any unfortunate consequences before we publish the
> definition.
>
> How many unfortunate consequences could it have?
Maybe it gets in the way of calling the real printf. Maybe it
rules out too many extensions. Maybe POSIX-binding makes us too
OS dependent.
BTW, I seem to recall that C requires n-arg functions to have at
least one required arg. This is the kind of thing that makes me
glad Lisp is further away from the machine and OS. C has been
looking less and less attractive to me of late. Perhaps this has
influenced what I've said.
> If you trust our experience
> with Talk, I can assure you that there are no hidden problems.
That seems reasonable.
> > > and promote integration
> > > and output consistency between mixed Lisp/C applications.
> >
> > So would 1000 other things which we're not going to do. Picking
> > this one thing seems completely off the wall to me. Especially now.
> > We had years in which to do this, if it was so important.
> >
> > It's not just one thing. If there are other things aside from those
> > proposed which would help mixed language applications, I would be
> > interested in hearing about them. I hope other EuLispers would too.
>
> I'd be interested in hearing about them, and if they form a coherent
> package in which printf makes sense, that will be a strong point in
> favor of printf.
>
> You're the one who said there are 1000 other things we could do. So
> let's hear about some of them. I already proposed a certain number
> which Julian has kindly made ftable.
Well, I think many of the 1000 are silly too, since they're just
aiming at syntactic similarity, or undesirable for other reasons.
However, if the aim is to make it easier to work with C, then I think
we ought to take some time to think about it generally. I'd like to
be able to mix Lisp and C procedures on a fairly equal basis, but even
a fairly minimal but standard (in EuLisp) foreign interface might be
worthwhile.
> If someone were to propose a format that was
> really, really better than printf, than I would certainly listen.
Something that can handle Lisp data is already better than, and
different from, printf.
> Tell me why it's compatible with C in a suffucuently useful sense.
> Common Lisp I/O is compatible with C to the extent that it can be
> implemented in C and hence can be regarded as an extension.
>
> Here's how: [parallel Lisp / C code omitted]
> The second Lisp form has no objective advantages over the first. The
> first is likely to be understood and appreciated by 95% of
> Unix/Windows programmers whatever their background. The second is
> understood immediately only by experienced Lispers, who in any case
> also understand the first form because they've all programmed in C by
> now.
You've shown a case (%d vs ~d) that works well. %s works less well,
as does %X (Is it supposaed to be obvioius what this means? How about
%e vs %E?) How do I output a Lisp object that's not handled by C?
How do I control whether the output is readable by READ? There are
many important cases that aren't resolved by analogy with C's printf.
In any case this is a very small part of what programmers have to do
when moving between Lisp and C.
> I have used a wide range of programming languages, and none of them
> except C use printf. That it's in a lower-level standard is beside
> the point. Languages should be independent of operating systems
> and the like. Now, if you want to propose that we have all POSIX
> calls in a library, maybe that makes sense.
>
> The language *is* independent of OS's and the like. However, we're
> talking about the I/O library, which is necessarily tied to some
> notion of its operating environment. It just so happens that POSIX
> provides such a notion which is both familiar to most programmers and
> quite portable across the vast majority of platforms on which EuLisp
> will be used.
I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful
that Lisp is at a sufficiently higher level that similar problems
don't occur (very often).
> I'm sorry, but if you want C and C++ programmers to use EuLisp
> for anything at all you're going to have to compete with the
> languages they would use otherwise, namely C and C++. Removing
> micro-irritants is not an effective way to do this.
>
> I completely disagree. It is possible to present Lisp as a complement
> to C/C++; in fact, this seems to be by far the most successful way to
> present it. It is unnecessary to compete; both languages have a
> place.
By "compete with" I don't mean "totally replace". However, people
do all kinds of things in C (and C++). Things I would do in Lisp.
Companies offer products in or for C/C++ that look more natural to me
as Lisp products. Indeed, C and C++ programmers get by pretty well
without Lisp. To a large extent, getting them to use Lisp for
something involves getting them to not use C or C++.
The only alternative is to get them to do in Lisp things they wouldn't
do at all otherwise. I find people willing to do so much in C and C++
that I think the scope for this is small.
> Removing any micro-irritant for which we don't provide an
> obvious better alternative also helps, just a little in each case, but
> still some.
But sometimes it makes things worse for other people or looks silly
or inconsistent with other things.
> So I repeat: Why not? I have presented a detailed
> argument along with a concrete, implemented proposal. It has pleased
> both our programmers and the clients we've shown it to. What argument
> is there for holding on to the status quo?
See above.
-- jeff
* write-ups from Harley
:From: Harley Davis
Date: Mon, 6 Dec 93 14:31:57 GMT
:From: Jeff Dalton
> the standard it provides for printf is a standard for printing C data
> types.
BTW, do we have null-terminated strings in EuLisp? I think it
would be a good idea. Also a way to test for the null char.
-- jeff
I agree. Of course, we should be careful to also allow the length to
be explicitly coded in the string.
-- Harley
* printf
:From: Harley Davis
> Therefore, when we want to provide a service in EuLisp which has an
> analogue in POSIX, we should provide a binding to the equivalent POSIX
> functions.
It looks like you're not providing a binding but rather a different
(albeit similar) function.
In the mysterious world of inter-language standards, this proposal
certainly counts as a binding. If you doubt it, check out the CORBA
spec and what they count as a language binding. (For example, the
differences between the C/C++/SmallTalk bindings.) Other references
are also available. Basically, "binding" is a pretty loose word.
> In addition, because Lisp programmers expect better error
> handling and a simpler interface, we should add in error handling and
> take advantage of existing EuLisp types when providing such a binding.
> As a concrete example of this reasoning, I propose replacing the
> existing EuLisp format (which in any case is basically printf with
> renamed directives) with the POSIX printf, plus various improvements.
> Additionally, stream and file operations can be based on their POSIX
> equivalents. In the case of files, this binding is fairly
> straightforward; in the case of streams, it is more complicated
> because FILE*'s and fd's are insufficient for a number of reasons (not
> least is that they don't support READ/PRINT very well). However, it
> is at least possible to have stream operations which are explicitly
> buffered in a way compatible with FILE*'s, and in fact we can provide
> a reasonable level of genericity in streams by defining a generic
> protocol over this buffering. A demonstration specification and
> implementation is provided by Ilog Talk, which has taken this
> approach.
If I were using C and Eulisp together, I'd want to know several
things. Can I pass EuLisp strings and streams directly to C
functions?
Do Eulisp std{in,out} and C std{in,out} stay in step?
Do they share buffers?
Since there is no foreign language interface in EuLisp, it's pretty
hard to specify this, no? In Talk, you can pass strings to C (they're
null terminated direct char * pointers), and streams translated to
fd's (was FILE* but it was too limiting). The stdxxx in Talk are
initialized from C, but then go their own way. There is no buffer
sharing (although we did want to, it turned out that the FILE* buffers
aren't sufficiently portably controllable.)
Now it looks to me like you're diverging from the POSIX routines
in some significant ways, thought it's not clear exactly what they
are. But once you diverge, having the same names is a somewhat
mixed blessing.
If you want to go with the sales argument, you can say that EuLisp has
a POSIX binding with improvements, so C programmers using EuLisp have
both a familiar set of functions and a more comfortable environment to
use them.
> and therefore make life easier for those programmers which have
> chosen to use it.
It doesn't make life easier for me. There's a class of programmers,
which doesn't include programmers like me, that's going to get this
easier life. If this isn't designed to attract them, then I'm baffled
as to what aim it has; and I don't see the point in debating whether
it's exactly C and C++ programmers or some similar group. Nonetheless,
printf will be more familiar to C programmers than to anyone else.
I think I explicitly mentioned that this would help a majority of
programmers. Personally, given the choice between helping the
minority of Lisp programmers vs. the vast majority of C/C++
programmers, I prefer the latter. This necessarily means that the
minority is slightly disgruntled. I would have to say, too bad, Jeff,
but I don't really think you would suffer very much, if at all.
Well, I often have to use C together with Lisp, and changing the
format conventions makes me less inclined to use EuLisp. But perhaps
I don't use C enough.
Have you looked at the EuLisp format conventions? They're not
compatible with anything else. Why does a ~ please you more than a %?
> I really do believe that printf is
> better than some mutant format which is in any case based on printf.
Was it based on printf?
Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the
function format:
"These formatting directives are intentionally compatible with the
facilities defined for the function fprintf in ISO/EIC 9899 : 1990."
So EuLisp's format is already printf hiding behind a thin veneer of
pseudo-Lisp compatibility.
> I also believe that basing functionality on a standard when we can't
> do much better is also good for the language since it makes it simpler
> to implement and specify and easier to learn.
But how much of existing printf can we use? We'll want to print
many things that aren't handled by printf, including numbers of
sorts not known in C.
The new directive %A prints all Lisp objects as if printed by prin.
This will obviously handle numbers too.
> won't impress C programmers,
>
> disagree; all the C/C++ programmers here think it's good.
So C and C++ programmers matter after all, do they?
Of course, that's the whole point. I differed with the idea that this
proposal is primarily meant to attract them rather than retain them.
We want to please C/C++ programmers because they represent 90% of
Unix/DOS/Windows programmers.
> is a gratuitous change from existing Lisp practice,
>
> it's not gratuitous
If the name doesn't matter, then changing the name is gratuitous.
Exactly. Why change from the standard printf to bizarre, obscure format?
> How many unfortunate consequences could it have?
Maybe it gets in the way of calling the real printf. Maybe it
rules out too many extensions. Maybe POSIX-binding makes us too
OS dependent.
Unices are almost all POSIX compliant.
Windows NT has a POSIX compliant module (and better ones available
commercially.)
VMS is now POSIX compliant.
Even DOS has a goodly number of the POSIX functions.
What OS are you worried about? Perhaps Mac? What is the situation
there? What about the AS/400? Should we care about that?
BTW, I seem to recall that C requires n-arg functions to have at
least one required arg. This is the kind of thing that makes me
glad Lisp is further away from the machine and OS. C has been
looking less and less attractive to me of late. Perhaps this has
influenced what I've said.
C has never looked particularly attractive to me. In fact, I think it
sucks for almost every program I want to write. Nevertheless, that is
what people use and for certain areas we don't have interfaces that
are much better.
Well, I think many of the 1000 are silly too, since they're just
aiming at syntactic similarity, or undesirable for other reasons.
However, if the aim is to make it easier to work with C, then I think
we ought to take some time to think about it generally. I'd like to
be able to mix Lisp and C procedures on a fairly equal basis, but even
a fairly minimal but standard (in EuLisp) foreign interface might be
worthwhile.
I would be happy to propose a minimal foreign function interface (for
C anyway) if people are interested.
> If someone were to propose a format that was
> really, really better than printf, than I would certainly listen.
Something that can handle Lisp data is already better than, and
different from, printf.
But upwardly compatible with it.
> Tell me why it's compatible with C in a suffucuently useful sense.
> Common Lisp I/O is compatible with C to the extent that it can be
> implemented in C and hence can be regarded as an extension.
>
> Here's how: [parallel Lisp / C code omitted]
> The second Lisp form has no objective advantages over the first. The
> first is likely to be understood and appreciated by 95% of
> Unix/Windows programmers whatever their background. The second is
> understood immediately only by experienced Lispers, who in any case
> also understand the first form because they've all programmed in C by
> now.
You've shown a case (%d vs ~d) that works well. %s works less well,
Converts its argument to a string.
as does %X (Is it supposaed to be obvioius what this means?
What's the problem?
How about %e vs %E?)
Why is %e vs. %E mysterious?
How do I output a Lisp object that's not handled by C?
%A.
How do I control whether the output is readable by READ?
%A.
There are
many important cases that aren't resolved by analogy with C's printf.
But they're in the proposal.
In any case this is a very small part of what programmers have to do
when moving between Lisp and C.
Naturally. Any implementation needs a foreign function interface. As
I said, if people are willing to consider such a thing for the
language (and I think the public would applaud a Lisp with a standard,
even minimal, FFI), I would be happy to start the ball rolling with a
proposal.
> I have used a wide range of programming languages, and none of them
> except C use printf. That it's in a lower-level standard is beside
> the point. Languages should be independent of operating systems
> and the like. Now, if you want to propose that we have all POSIX
> calls in a library, maybe that makes sense.
>
> The language *is* independent of OS's and the like. However, we're
> talking about the I/O library, which is necessarily tied to some
> notion of its operating environment. It just so happens that POSIX
> provides such a notion which is both familiar to most programmers and
> quite portable across the vast majority of platforms on which EuLisp
> will be used.
I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful
that Lisp is at a sufficiently higher level that similar problems
don't occur (very often).
What sorts of OS dependencies do you encounter in C I/O these days?
Is it because you're using non-POSIX functions or options? Do you
program for DOS or Windows 3?
The only alternative is to get them to do in Lisp things they wouldn't
do at all otherwise. I find people willing to do so much in C and C++
that I think the scope for this is small.
Then you think Lisp doesn't have much future?
-- Harley
* The rest of the write-ups from Harley
:From: Jeff Dalton
Harley -- I had a closer look at your big document last night.
Printf is such a small issue that I'm surprised it's generated
such long messages. There are a number of more important issues
in there; it will be interesting to see what happens to them.
After looking again at the whole set of proposals, I'm not longer
so inclined to oppose printf. However, I do have several other
concerns, some minor, others not.
* The buffer operations rely on default handlers. Eulisp doesn't
have them (unless they sneaked in when I wasn't looking). Adding
default handlers is a significant non-local change. I don't think
we should make it without thinking very carefully about the
consequences.
* The rules for defliteral with respect to modules are far too
restrictive. Expressions containing only constants (including
literals defined by defliteral) should be evaluable at compile
time. This is pretty standard practice in compilers and doesn't
bring in the deeper environment issues that have been tied to
using *macros* in the module in which they're defined. Applying
the same restriction to literals creates unnecessary trouble
for users and makes the language look bad.
* The #f syntax is taken for filename objects. I would prefer
that it be available for people who want to follow Scheme
conventions.
* The order of args for merge-filenames is somewhat peculiar.
I find it easier to handle such functions if one arg (the 2nd
in CL) is treated as supplying defaults.
* There's no fdopen (which I would find useful). Most of the
POSIX-derived routines are FILE * routines, but some are
(normally) for fds; so I'm a bit puzzled about what the rules
are.
* There are more POSIX-related routines in here than in the C
standard. If we take the POSIX route, I think we should identify a
core subset and relegate the rest to a POSIX library, if we want
them. This would be level 2 and hence not specified at this time.
-- jeff
* converter & inheritance
:From: Ingo Mohr
During implementation of collection functions a question occurs about the
intention of converters:
[1] Should converters be bound exactly to the class given in (defgeneric
(converter class) ...) or
[2] should subclasses inherit converters of its superclasses ?
Looking at the definition of convert&co (0.99) the answer must be [1]. The
answer [2] is implied indirectly by the (converter <list>): If this
converter is available only for <list> I cannot use it in conjunction with
class-of as in
(convert y (class-of x))
because <list> as an abstract class should never be the result of class-of.
On the other hand it makes no sense to replace (converter <list>) by
(converter <cons>) in the definition because it is useless for conversion
of zero-length sequences.
The same problem occurs if <table> will be an abstract class in the future.
It would be fine if some of you can give me an answer.
Ingo
* converter & inheritance
:From: Harley Davis
In article Ingo Mohr writes:
During implementation of collection functions a question occurs about the
intention of converters:
[1] Should converters be bound exactly to the class given in (defgeneric
(converter class) ...) or
[2] should subclasses inherit converters of its superclasses ?
I remember that we decided at one meeting that the answer is
definitely [2] for exactly the reasons you brought up. I would hope
the definition respects this intention, but it is never guaranteed.
-- Harley
* The rest of the write-ups from Harley
:From: Harley Davis
In article Jeff Dalton writes:
Harley -- I had a closer look at your big document last night.
Printf is such a small issue that I'm surprised it's generated
such long messages. There are a number of more important issues
in there; it will be interesting to see what happens to them.
Well, I'm glad we've moved from confrontation mode to technical
discussion mode!
Your points are all valid concerns. Let's see what there is to say...
* The buffer operations rely on default handlers. Eulisp doesn't
have them (unless they sneaked in when I wasn't looking). Adding
default handlers is a significant non-local change. I don't think
we should make it without thinking very carefully about the
consequences.
This issue has never come up in EuLisp because all of the conditions
defined up to this point are errors, for which we have no need (or
desire) to specify the default handling. These conditions are the
first which do require such a notion. In Talk we have a distinguished
handler function named default-handler which is the called when no
others are applicable (or when a handler method falls through.) Not
only does the existence of this handler cause no particular problems,
but it is quite necessary.
Even without specifying the name of the default handler, I don't see
how specifying default behavior for a condition can cause special
problems. Introducing a special named handler also causes no problems
because of the rule which says that the user cannot define methods for
specified generic functions specializing only on specified classes.
* The rules for defliteral with respect to modules are far too
restrictive. Expressions containing only constants (including
literals defined by defliteral) should be evaluable at compile
time. This is pretty standard practice in compilers and doesn't
bring in the deeper environment issues that have been tied to
using *macros* in the module in which they're defined. Applying
the same restriction to literals creates unnecessary trouble
for users and makes the language look bad.
What you say is entirely true, and we could adopt that approach. In
Talk we chose not to simply because it complicates the explanation of
module processing, and we found that even with our simpleminded module
system people often had some trouble understanding what was going on.
We could also adopt another version of defliteral which only accepts
constant expressions (as outlined above), and then the problem also
doesn't come up. However, I do from time to time write defliterals
with non-constant values where I don't mind evaluating the value in
the compilation environment, so such a restriction would be
occasionally bothersome.
Some people would also be a little surprised if (defliteral x (+ 1 1))
was not a legal literal, since it looks like a constant expression.
For these cases, you would have to explain why the literal needs to be
in a syntax module.
Finally, at least in our experience it is not too troublesome to put
defliterals apart simply because most systems already have one or more
compile-time macro modules so no new module is needed just for literals.
* The #f syntax is taken for filename objects. I would prefer
that it be available for people who want to follow Scheme
conventions.
I have no opinion on this. We just chose #f for obvious reasons and
because we don't give a hoot about Scheme compatibility.
* The order of args for merge-filenames is somewhat peculiar.
I find it easier to handle such functions if one arg (the 2nd
in CL) is treated as supplying defaults.
Perhaps if merge-filenames were called filename+ or something like
that it would be clearer. It is not intended to be equivalent to CL's
merge-pathnames. It is really a concatenation whose intended purpose
is to catenate a directory to a basename and optionally an extension:
(merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
This was seen to be the most frequent use of the various pathname
merging functions from Le-Lisp, and so we made this case as easy as
possible.
* There's no fdopen (which I would find useful). Most of the
POSIX-derived routines are FILE * routines, but some are
(normally) for fds; so I'm a bit puzzled about what the rules
are.
OK, this is completely true. As I mentioned briefly in my last
message, we have recently abandoned the FILE* metaphor for a purely
fd-based metaphor. (This, since we wrote the doc.) So we replaced
fopen with open, etc. (Problem: The mode specification with open is
really annoying compared with fopen.) The buffering is then a purely
Lisp-based notion, only conceptually derived from FILE*'s.
* There are more POSIX-related routines in here than in the C
standard. If we take the POSIX route, I think we should identify a
core subset and relegate the rest to a POSIX library, if we want
them. This would be level 2 and hence not specified at this time.
I have no problem with this. Would you like to take a crack at
specifying the core functions?
-- Harley
* Harley wirte-ups: defliteral
:From: Jeff Dalton
> In article Jeff Dalton writes:
>
> Harley -- I had a closer look at your big document last night.
> Printf is such a small issue that I'm surprised it's generated
> such long messages. There are a number of more important issues
> in there; it will be interesting to see what happens to them.
>
> Well, I'm glad we've moved from confrontation mode to technical
> discussion mode!
Ah, but you haven't seen my next message yet.
> Your points are all valid concerns. Let's see what there is to say...
> * The rules for defliteral with respect to modules are far too
> restrictive. Expressions containing only constants (including
> literals defined by defliteral) should be evaluable at compile
> time. [...]
> What you say is entirely true, and we could adopt that approach.
Let's then. But is it clear what "that approach" is? (See below.)
> In Talk we chose not to simply because it complicates the explanation of
> module processing, and we found that even with our simpleminded module
> system people often had some trouble understanding what was going on.
But the workaround (using a defglobal instead of or paired with a
defliteral) is tricky too.
[I'm referring here to the part of the paper that suggests that
when you want something like this:
(defliteral %pi% 3.14...)
(defliteral %pi%/2 (/ %pi% 2))
which is illegal (!), you can get around this by writing:
(defglobal :pi 3.14 ...)
(defliteral %pi% :pi)
(defliteral %pi%/2 (/ :pi 2))
]
> We could also adopt another version of defliteral which only accepts
> constant expressions (as outlined above), and then the problem also
> doesn't come up. However, I do from time to time write defliterals
> with non-constant values where I don't mind evaluating the value in
> the compilation environment, so such a restriction would be
> occasionally bothersome.
>
> Some people would also be a little surprised if (defliteral x (+ 1 1))
> was not a legal literal, since it looks like a constant expression.
> For these cases, you would have to explain why the literal needs to be
> in a syntax module.
(+ 1 1) is an expression containing only constants in the sense that
I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
> Finally, at least in our experience it is not too troublesome to put
> defliterals apart simply because most systems already have one or more
> compile-time macro modules so no new module is needed just for literals.
That's fine when the expressions don't refer to other literals.
Needing a module for each level makes this whole approach to modules
look wrong.
-- jeff
* Harley write-ups: etc
:From: Jeff Dalton
> * The #f syntax is taken for filename objects. I would prefer
> that it be available for people who want to follow Scheme
> conventions.
>
> I have no opinion on this. We just chose #f for obvious reasons and
> because we don't give a hoot about Scheme compatibility.
If we called them "pathnames", we could use #p, I suppose.
KCL uses #"...".
But do we have readtables in EuLisp these days? This is a good case
for testing whether our system is sufficiently flexible, because
someone might want Scheme syntax in some modules but not in others.
> * The order of args for merge-filenames is somewhat peculiar.
> I find it easier to handle such functions if one arg (the 2nd
> in CL) is treated as supplying defaults.
>
> Perhaps if merge-filenames were called filename+ or something like
> that it would be clearer. It is not intended to be equivalent to CL's
> merge-pathnames. It is really a concatenation whose intended purpose
> is to catenate a directory to a basename and optionally an extension:
>
> (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
>
> This was seen to be the most frequent use of the various pathname
> merging functions from Le-Lisp, and so we made this case as easy as
> possible.
I'm used to the default idea. It doesn't have to be compatible
with CL's merge-pathnames. Maybe this is OK too, but if so I
think the documentation could do more to suggest a "model"
for understanding what roles the arguments play. The defaulting
idea is a model I find helpful.
Anyway, what do other people think?
> * There's no fdopen (which I would find useful). Most of the
> POSIX-derived routines are FILE * routines, but some are
> (normally) for fds; so I'm a bit puzzled about what the rules
> are.
>
> OK, this is completely true. As I mentioned briefly in my last
> message, we have recently abandoned the FILE* metaphor for a purely
> fd-based metaphor. (This, since we wrote the doc.) So we replaced
> fopen with open, etc. (Problem: The mode specification with open is
> really annoying compared with fopen.) The buffering is then a purely
> Lisp-based notion, only conceptually derived from FILE*'s.
Humm. I'm not sure what all the implications of this are.
If I can use printf in both Lisp and C, I'd expect the output to
go to the same place and be interleaved in the obvious way.
Indeed, it's a pain when using C with some Lisps that output
to the same destination is independent.
> * There are more POSIX-related routines in here than in the C
> standard. If we take the POSIX route, I think we should identify a
> core subset and relegate the rest to a POSIX library, if we want
> them. This would be level 2 and hence not specified at this time.
>
> I have no problem with this. Would you like to take a crack at
> specifying the core functions?
Can someone say what C specifies? (Richard?)
-- jeff
* Harley write-ups: default handlers
:From: Jeff Dalton
> Your points are all valid concerns. Let's see what there is to say...
>
> * The buffer operations rely on default handlers. Eulisp doesn't
> have them (unless they sneaked in when I wasn't looking). Adding
> default handlers is a significant non-local change. I don't think
> we should make it without thinking very carefully about the
> consequences.
>
> This issue has never come up in EuLisp because all of the conditions
> defined up to this point are errors, for which we have no need (or
> desire) to specify the default handling.
On the contrary, the issue has come up several time and I have
always (successfully) opposed it.
However, my main concern is not default handlers per se but rather
an idea that default handlers encourage, namely the idea that certain
types of conditions are -- by virtue of being that type -- continuable
to not. I think this should be a property of how the condition is
signalled, not of the type. Programmers should be free to signal
the most appropriate condition without being discouraged by the
prospect of writing a recovery handler for it. This is one of the
considerations that went into the design of the CL condition system,
and I think it's right.
That is, I think we should make signalling independent of any
provision for recovery. Default handlers encourage a different
way of thinking in which the signaller has to know what the
default handler does and be prepared to deal with it.
Default handlers also, of course, have the usual problems of such global
arrangements, that when different parts of a system have different
requirements it's difficult for them to fit together.
> These conditions are the first which do require such a notion.
The requirement is at least fairly subtle. Why can't an
appropriate generic be called directly?
> In Talk we have a distinguished
> handler function named default-handler which is the called when no
> others are applicable (or when a handler method falls through.) Not
> only does the existence of this handler cause no particular problems,
> but it is quite necessary.
In Eulisp, we were careful to define signalling is such a way
that it didn't require a default handler to give the default
"no handler" behavior; instead a no-handler function (perhaps a
nominal function rather than one actually in the language) was
called when no handler handled the condition. (This includes
the case where no handler exists, of course.)
Maybe this isn't obvious now, but it was explicitly an aim of this
definition to avoid bringing default handlers into the picture.
-- jeff
* Harley write-ups: etc
:From: Richard Tobin
> I'm used to the default idea.
> Anyway, what do other people think?
Being used to the C/Unix way of doing things, I find Harley's version
clearer (expecially if it has a better name).
BTW, does the filename stuff allow for HTTP URLs? These look like
http://host[:port]/path/path/...
> Humm. I'm not sure what all the implications of this are.
> If I can use printf in both Lisp and C, I'd expect the output to
> go to the same place and be interleaved in the obvious way.
> Indeed, it's a pain when using C with some Lisps that output
> to the same destination is independent.
This is a problem. It could of course be overcome by an
implementation providing its own replacement for the C stdio library
that used the same buffers as Lisp (and there are several free
versions available), but this is not ideal. If C and Lisp use the
same file descriptor but different buffers, they have to be sure to
call fflush() sufficiently often.
> > I have no problem with this. Would you like to take a crack at
> > specifying the core functions?
I agree that it needs to be cut down. It would be bizarre if EuLisp
specified more of this than C does!
> Can someone say what C specifies? (Richard?)
Ok (grouping corresponds to that in the C standard):
remove(), rename(), tmpfile(), tmpnam()
fclose(), fflush(), fopen(), freopen(), setbuf(), setvbuf()
fprintf(), fscanf(), printf(), scanf(), sprintf(), sscanf(),
vfprintf(), vprintf(), vsprintf()
fgetc(), fgets(), fputc(), fputs(), getc(), getchar(), gets(),
putc(), putchar(), puts(), ungetc(),
fread(), fwrite()
fgetpos(), fseek(), fsetpos(), ftell(), rewind()
clearerr(), feof(), ferror(), perror()
The v*printf() functions are irrelevant (we have rest lists), as are
the f* variants of getc() etc (which are just non-macro variants).
In addition, we probably *do* want to specify the fillbuf/flushbuf
functions (I don't have a POSIX document, but I assume it doesn't).
Whether we want set[v]buf depends on how we do that.
We certainly don't want tcgetattr() and other such POSIXisms.
-- Richard
* printf
:From: Jeff Dalton
> It looks like you're not providing a binding but rather a different
> (albeit similar) function.
>
> In the mysterious world of inter-language standards, this proposal
> certainly counts as a binding. If you doubt it, check out the CORBA
> spec and what they count as a language binding.
A binding to X ought to be something that refers to X, not something
that has the same name a something that refers to X but which actually
refers to X' or maybe Z. Very strange.
> If I were using C and Eulisp together, I'd want to know several
> things. Can I pass EuLisp strings and streams directly to C
> functions?
> Do Eulisp std{in,out} and C std{in,out} stay in step?
> Do they share buffers?
>
> Since there is no foreign language interface in EuLisp, it's pretty
> hard to specify this, no?
Sure. But if we're going down this road of making it easier for
EuLisp and C to work together, this is the sort of thing that has
to be addressed. My objection to the printf proposal was, in a
sense, that it was taking a few steps in that direction without
really taking it seriously.
> (... it turned out that the FILE* buffers
> aren't sufficiently portably controllable.)
(This is one reason why C I/O is full of annoying special cases
and references to std io lib internals that differ from implementation
to, lose, lose.)
> Now it looks to me like you're diverging from the POSIX routines
> in some significant ways, thought it's not clear exactly what they
> are. But once you diverge, having the same names is a somewhat
> mixed blessing.
>
> If you want to go with the sales argument, you can say that EuLisp has
> a POSIX binding with improvements, so C programmers using EuLisp have
> both a familiar set of functions and a more comfortable environment to
> use them.
That makes sense. We can say: see how nice even printf would be
if it wasn't embedded in a language that can just barely deal with
n-arg functions, or something along those lines.
> > I really do believe that printf is
> > better than some mutant format which is in any case based on printf.
>
> Was it based on printf?
>
> Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the
> function format:
>
> "These formatting directives are intentionally compatible with the
> facilities defined for the function fprintf in ISO/EIC 9899 : 1990."
I think it's pretty clear that the EuLisp format function is
based on MacLisp / CL format, though no doubt we've also held
C in mind. (Hence the \n.) Indeed, this is so clear that I
thought you must be saying the original format was based on printf.
> The new directive %A prints all Lisp objects as if printed by prin.
> This will obviously handle numbers too.
But will the C directives handle these things? That was my point.
In addition, I need two Lisp data directives: one the prints escape
chars, quote marks around strings, etc; and one that doesn't.
> So C and C++ programmers matter after all, do they?
>
> Of course, that's the whole point. I differed with the idea that this
> proposal is primarily meant to attract them rather than retain them.
Are you serious?! *That*'s what it comes down to? The difference
between attract and retain? (As if you might retain while repelling,
say.)
Maybe Ilog has C++ programmer it wants to retain, but EuLisp doesn't.
> If the name doesn't matter, then changing the name is gratuitous.
>
> Exactly. Why change from the standard printf to bizarre, obscure format?
Why change EuLisp? That's the gratuitious change I'm talking about.
> > How many unfortunate consequences could it have?
>
> Maybe it gets in the way of calling the real printf. Maybe it
> rules out too many extensions. Maybe POSIX-binding makes us too
> OS dependent.
>
> Unices are almost all POSIX compliant.
> Windows NT has a POSIX compliant module (and better ones available
> commercially.)
> VMS is now POSIX compliant.
> Even DOS has a goodly number of the POSIX functions.
Is this suppose to show it doesn't get in the way of calling the
real printf or that it doesn't rule out too many extentsions?
Or even that it doesn't bring us too close to the OS? It's
not just a matter of portability, it's also having losing OS
features intrude into the language.
> I would be happy to propose a minimal foreign function interface (for
> C anyway) if people are interested.
I am, and there are some people on the net who would be too, because
they've been complaining about the lack of exactly that. From time
to time, I try to point such people at EuLisp, and I'd like to be
able to point them more effectively. Moreover I'm all the time
needing to call C things myself, although I call things like
waitpid which don't have the most accommodating interface.
> But upwardly compatible with it.
Depends. Lower case %a isevidently not an extension allowed by the
standard.
> You've shown a case (%d vs ~d) that works well. %s works less well,
>
> Converts its argument to a string.
Etc.
Perhaps you misunderstood me. %d and ~d makes it look like a EuLisp
printf is a good idea, instantly understandable to both Lisp and C
programmers. But that's the best example. There are others that
don't make things look so nice. %s and ~s do different things,
for instance. %X has a meaning that Lisp programmers won't expect,
thought they might get a clue from %e vs %E. And so on.
> There are
> many important cases that aren't resolved by analogy with C's printf.
>
> But they're in the proposal.
Beside the point. Programmers won't be able to say "Humm. By
analogy with C printf, it probably works like this."
> In any case this is a very small part of what programmers have to do
> when moving between Lisp and C.
>
> Naturally. Any implementation needs a foreign function interface. As
> I said, if people are willing to consider such a thing for the
> language (and I think the public would applaud a Lisp with a standard,
> even minimal, FFI), I would be happy to start the ball rolling with a
> proposal.
I'd like to see a proposal with the POSIX stuff simplified and an FFI
added.
> I'm often annoyed by OS dependencies in C I/O and am repeatedly thankful
> that Lisp is at a sufficiently higher level that similar problems
> don't occur (very often).
>
> What sorts of OS dependencies do you encounter in C I/O these days?
Code that assumes the internals of a particular stdio lib
implementation. Code that assumes Sys V system calls. Code
that
> Is it because you're using non-POSIX functions or options?
It's chiefly because other people can't or didn't write portable code.
> Do you program for DOS or Windows 3?
I use only Unix, and stick to BSD when I can. The whole Sys V
excursion was a big mistake that will give the world to Microsoft.
Making the same kind of mistake w.r.t. C++ will be a disaster too.
(The mistake is making something have a major impact by thinking
it will have a major impact and then acting accordingly, thus
bringing about a major impact that could otherwise have been
avoided. Meanwhile, some other folk, not so distracted, do
something else that wins.)
-- jeff
* Harley write-ups: default handlers
:From: Harley Davis
However, my main concern is not default handlers per se but rather
an idea that default handlers encourage, namely the idea that certain
types of conditions are -- by virtue of being that type -- continuable
to not. I think this should be a property of how the condition is
signalled, not of the type. Programmers should be free to signal
the most appropriate condition without being discouraged by the
prospect of writing a recovery handler for it. This is one of the
considerations that went into the design of the CL condition system,
and I think it's right.
I think it's not right for all conditions. For instance, the
<fill-buffer>/<flush-buffer> conditions are just inherently
continuable. (Or, if you really insist, they are always signaled
continuably since the system signals them that way.)
This case is somewhat different from your point since programmers are
never supposed to signal the conditions themselves.
That is, I think we should make signalling independent of any
provision for recovery. Default handlers encourage a different
way of thinking in which the signaller has to know what the
default handler does and be prepared to deal with it.
What's wrong with this way of thinking for these conditions?
Default handlers also, of course, have the usual problems of such global
arrangements, that when different parts of a system have different
requirements it's difficult for them to fit together.
If there is one default handler which is a unique object I don't see
how this problem arises.
> These conditions are the first which do require such a notion.
The requirement is at least fairly subtle. Why can't an
appropriate generic be called directly?
We wanted to support two ways to extend streams:
1. When designing a new class of streams, by writing a method on the
gf fill-buffer (or flush-buffer) you describe its interaction with
the rest of the system. Example: the vector-string-stream example
in the proposal.
2. You can dynamically control all streams by wrapping a handler which
adds or modifies the behavior of these conditions. Example: the
line counting example in the proposal. Another example is the
Le-Lisp printy-printer which uses a clever hack involving the
<flush-buffer> condition to know when to put an expression on one
line and when to split it up.
The Le-Lisp experience showed that both dimensions of extensions are
desirable. We could, of course, just say "tough" to the second type
of extension and get rid of the conditions. I think this would be
sad, especially since it's quite difficult to subclass file streams.
> In Talk we have a distinguished
> handler function named default-handler which is the called when no
> others are applicable (or when a handler method falls through.) Not
> only does the existence of this handler cause no particular problems,
> but it is quite necessary.
In Eulisp, we were careful to define signalling is such a way
that it didn't require a default handler to give the default
"no handler" behavior; instead a no-handler function (perhaps a
nominal function rather than one actually in the language) was
called when no handler handled the condition. (This includes
the case where no handler exists, of course.)
Maybe this isn't obvious now, but it was explicitly an aim of this
definition to avoid bringing default handlers into the picture.
Well, I can only say that we have them, like them, and use them all
the time. What exactly is the argument for the idea that no condition
class is ever inherently continuable? (In Talk, continuability is
always a question of the protocol between the signaler and the handler
for a given condition class. It is not an explicit notion.)
-- Harley
* Harley write-ups: default handlers
:From: Jeff Dalton
> The <fill-buffer>/<flush-buffer> conditions are just inherently
> continuable. (Or, if you really insist, they are always signaled
> continuably since the system signals them that way.)
The names suggest requests rather than conditions. <buffer-empty>
sounds more like a condition, and I don't see why it has to be
continuable.
The line counting and dimilar examples came up before. If they
require inherently continuable conditions, I'd rather lose the
examples.
I think this is an issue we considered several times before and
we stayed with the approach we have now. I don't think we should
overturn all that as a side effect of adopting a stream proposal.
I also don't want to have to spend a lot of time defending those
decisions. Is it as essentail part of the stream proposal?
> That is, I think we should make signalling independent of any
> provision for recovery. Default handlers encourage a different
> way of thinking in which the signaller has to know what the
> default handler does and be prepared to deal with it.
>
> What's wrong with this way of thinking for these conditions?
Why should I have to provide a way to continue? If you want to define
a protocol in which a way to continue is normally provided, that may
be ok. But if it becomes a property of the type, then pretty soon
every case where we can think of a useful way to continue will start
requiring this. On bad consequence of this, indicated in my earlier
message, is that programmers will signal a less appropriate condition
just to avoid dealing with continuing.
I also think it's conceptually cleaner if we separate signalling
from providing ways to recover. Our system already departs from this
by having a continue arg to signal (at least that's what I remember).
I can live with that, but I don't want to go further.
> Default handlers also, of course, have the usual problems of such global
> arrangements, that when different parts of a system have different
> requirements it's difficult for them to fit together.
>
> If there is one default handler which is a unique object I don't see
> how this problem arises.
What do you mean? Different parts may want different defaults.
> > These conditions are the first which do require such a notion.
>
> The requirement is at least fairly subtle. Why can't an
> appropriate generic be called directly?
>
> We wanted to support two ways to extend streams:
>
> 1. When designing a new class of streams, by writing a method on the
> gf fill-buffer (or flush-buffer) you describe its interaction with
> the rest of the system. Example: the vector-string-stream example
> in the proposal.
>
> 2. You can dynamically control all streams by wrapping a handler which
> adds or modifies the behavior of these conditions. Example: the
> line counting example in the proposal. Another example is the
> Le-Lisp printy-printer which uses a clever hack involving the
> <flush-buffer> condition to know when to put an expression on one
> line and when to split it up.
Since 2 doesn't seem to be an essential part of the stream proposal,
I'd like to drop it. I think it brings in too many difficult issues
too late in the day.
> The Le-Lisp experience showed that both dimensions of extensions are
> desirable. We could, of course, just say "tough" to the second type
> of extension and get rid of the conditions. I think this would be
> sad, especially since it's quite difficult to subclass file streams.
I would think that one of our aims would be to provide file streams
that were reasonably easy to subclass.
> Maybe this isn't obvious now, but it was explicitly an aim of this
> definition to avoid bringing default handlers into the picture.
>
> Well, I can only say that we have them, like them, and use them all
> the time.
> What exactly is the argument for the idea that no condition
> class is ever inherently continuable?
I haven't given one. My argument is more pragmatic and aesthetic
than essentialist. However, if I signal a condition, I'm saying
that something has occurred. Why should this ever require that
I also provide a way to continue? It looks like a separate decision
to me.
> (In Talk, continuability is
> always a question of the protocol between the signaler and the handler
> for a given condition class. It is not an explicit notion.)
This is better, and I might not mind it if it were sufficiently well
confined. But I don't want us to start requiring this in all kinds of
cases, and I think it has various problems.
-- jd
* write-ups from Harley
:From: Jeff Dalton
> > BTW, do we have null-terminated strings in EuLisp? I think it
> > would be a good idea. Also a way to test for the null char.
> > I agree. Of course, we should be careful to also allow the length to
> > be explicitly coded in the string.
> We shouldn't give up the rule that strings can contains any characters
> whatsoever.
Why not? Why is that more important than being able to give strings
to C?
> I think #\x0 must be excluded from the type <CHARACTER>
> if we want to allow implementations that use null-terminated
> strings.
I don't care one way or the other. The "null char" can be a
separate type. But it could be a character. Separate character
and string-character classes would not be required.
> Never specify that the set of characters has at least 256 elements.
Is that the current limit? What are we going to do about
international character sets?
> Otherwise we end up with Common-Lisp's distinction between CHARACTER
> and STRING-CHAR.
STRING-CHAR has been removed. This was part of adapting to
international character sets.
-- jd
* Harley wirte-ups: defliteral
:From: Jeff Dalton
> > What you say is entirely true, and we could adopt that approach.
>
> Let's then. But is it clear what "that approach" is? (See below.)
>
> We shouldn't do something just because we can. For example, the fact
> that your idea is rather complex to explain is a consideration.
I don't think my idea is even slightly complex to explain, especially
compared to Talk's defglobal trick. I still don't understand why
that works until I think for a minute or so.
> But the workaround (using a defglobal instead of or paired with a
> defliteral) is tricky too.
>
> But it follows an extremely clear model of module processing/execution.
If anything, it discredits that model.
> In other words, in Talk, in no sense is any
> form ever evaluated during module preprocessing. In your proposal,
> some forms are evaluated. We considered the simple model to be very
> important.
I think my model is simpler overall. Moreover, I think this rule
about modules is distorting more and more of the language. That
we can't even have a simple way of defining constants is going
too far.
> (+ 1 1) is an expression containing only constants in the sense that
> I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
>
> Is '+' a literal? If so, what about user-defined functions? This
> problem is definitely harder in EuLisp than in, say, C, because even
> the core functions are supposed to be redefinable and treated as
> normal bindings.
Is this so? When did it happen? People used to be concerend about
being able to analyze code. Are you now telling me that when the
compiler looks at a module in which + appears it can't tell what +
means? If so, then either we've broken the ability to analyze code
or the analysis is incredibly wimpy.
> > Finally, at least in our experience it is not too troublesome to put
> > defliterals apart simply because most systems already have one or more
> > compile-time macro modules so no new module is needed just for literals.
>
> That's fine when the expressions don't refer to other literals.
> Needing a module for each level makes this whole approach to modules
> look wrong.
>
> The defglobal solution suggested in the proposal gets around requiring
> more than 1 level. That's its purpose.
I know that's its purpose. But far from making this approach look
good, it makes it look like it must be based on a mistake.
-- jd
* Harley write-ups: etc
:From: Jeff Dalton
> > (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
> >
> > This was seen to be the most frequent use of the various pathname
> > merging functions from Le-Lisp, and so we made this case as easy as
> > possible.
>
> I'm used to the default idea. It doesn't have to be compatible
> with CL's merge-pathnames. Maybe this is OK too, but if so I
> think the documentation could do more to suggest a "model"
> for understanding what roles the arguments play. The defaulting
> idea is a model I find helpful.
>
> I think it's complicated to understand for beginners, and really
> confusing for people used to C (cf Richard's message.)
What is it about C or Unix that makes it confusing? It's evidently
something I never encountered, and Richard's message didn't say what
it was either.
BTW, when I first encountered merging (not in Lisp or Unix, though
I can't rememeber for sure where it was), merging with defaults
seemed very natural to me. I still don't have a simple model for
whatever it is you're doing.
> Humm. I'm not sure what all the implications of this are.
> If I can use printf in both Lisp and C, I'd expect the output to
> go to the same place and be interleaved in the obvious way.
> Indeed, it's a pain when using C with some Lisps that output
> to the same destination is independent.
>
> EuLisp printf needs to flush immediately for good interleaving. As
> far as prin goes, the flushing is explicit (or implicit with print).
> This is no worse than C++ streams, and people seem to live with that.
My point is this: if we have printf, I want these other things to
be true. If they aren't then printf starts being accompanied by
pitfalls. I don't want a big pitfall list for EuLisp. (My Common
Lisp one is only a few pages, but I've been forgetting to put things
in.)
-- jeff
* Harley write-ups: etc
:From: Harley Davis
Date: Tue, 7 Dec 93 16:38:07 GMT
:From: Jeff Dalton
> * The #f syntax is taken for filename objects. I would prefer
> that it be available for people who want to follow Scheme
> conventions.
>
> I have no opinion on this. We just chose #f for obvious reasons and
> because we don't give a hoot about Scheme compatibility.
If we called them "pathnames", we could use #p, I suppose.
Could work. We didn't do this to avoid confusion with other Lisp's
pathnames.
KCL uses #"...".
I hate this idea.
But do we have readtables in EuLisp these days? This is a good case
for testing whether our system is sufficiently flexible, because
someone might want Scheme syntax in some modules but not in others.
We don't have readtables, and I think they're a bad idea because they
make module processing even more complicated. Readtime vs. compile
time vs. load/run time -- yuck. I think another consideration was to
able to write the reader in lex/yacc without needing callbacks to
Lisp.
> (merge-filenames #f"/tmp" #f"some-file" ".e") -> #f"/tmp/some-file.e"
>
> This was seen to be the most frequent use of the various pathname
> merging functions from Le-Lisp, and so we made this case as easy as
> possible.
I'm used to the default idea. It doesn't have to be compatible
with CL's merge-pathnames. Maybe this is OK too, but if so I
think the documentation could do more to suggest a "model"
for understanding what roles the arguments play. The defaulting
idea is a model I find helpful.
I think it's complicated to understand for beginners, and really
confusing for people used to C (cf Richard's message.)
> * There's no fdopen (which I would find useful). Most of the
> POSIX-derived routines are FILE * routines, but some are
> (normally) for fds; so I'm a bit puzzled about what the rules
> are.
>
> OK, this is completely true. As I mentioned briefly in my last
> message, we have recently abandoned the FILE* metaphor for a purely
> fd-based metaphor. (This, since we wrote the doc.) So we replaced
> fopen with open, etc. (Problem: The mode specification with open is
> really annoying compared with fopen.) The buffering is then a purely
> Lisp-based notion, only conceptually derived from FILE*'s.
Humm. I'm not sure what all the implications of this are.
If I can use printf in both Lisp and C, I'd expect the output to
go to the same place and be interleaved in the obvious way.
Indeed, it's a pain when using C with some Lisps that output
to the same destination is independent.
EuLisp printf needs to flush immediately for good interleaving. As
far as prin goes, the flushing is explicit (or implicit with print).
This is no worse than C++ streams, and people seem to live with that.
-- Harley
* Harley wirte-ups: defliteral
:From: Harley Davis
> Your points are all valid concerns. Let's see what there is to say...
> * The rules for defliteral with respect to modules are far too
> restrictive. Expressions containing only constants (including
> literals defined by defliteral) should be evaluable at compile
> time. [...]
> What you say is entirely true, and we could adopt that approach.
Let's then. But is it clear what "that approach" is? (See below.)
We shouldn't do something just because we can. For example, the fact
that your idea is rather complex to explain is a consideration.
> In Talk we chose not to simply because it complicates the explanation of
> module processing, and we found that even with our simpleminded module
> system people often had some trouble understanding what was going on.
But the workaround (using a defglobal instead of or paired with a
defliteral) is tricky too.
But it follows an extremely clear model of module
processing/execution. In other words, in Talk, in no sense is any
form ever evaluated during module preprocessing. In your proposal,
some forms are evaluated. We considered the simple model to be very
important.
> Some people would also be a little surprised if (defliteral x (+ 1 1))
> was not a legal literal, since it looks like a constant expression.
> For these cases, you would have to explain why the literal needs to be
> in a syntax module.
(+ 1 1) is an expression containing only constants in the sense that
I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
Is '+' a literal? If so, what about user-defined functions? This
problem is definitely harder in EuLisp than in, say, C, because even
the core functions are supposed to be redefinable and treated as
normal bindings.
> Finally, at least in our experience it is not too troublesome to put
> defliterals apart simply because most systems already have one or more
> compile-time macro modules so no new module is needed just for literals.
That's fine when the expressions don't refer to other literals.
Needing a module for each level makes this whole approach to modules
look wrong.
The defglobal solution suggested in the proposal gets around requiring
more than 1 level. That's its purpose.
-- Harley
* write-ups from Harley
:From: Bruno Haible
> BTW, do we have null-terminated strings in EuLisp? I think it
> would be a good idea. Also a way to test for the null char.
>
> -- jeff
>
> I agree. Of course, we should be careful to also allow the length to
> be explicitly coded in the string.
>
> -- Harley
We shouldn't give up the rule that strings can contains any characters
whatsoever. I think #\x0 must be excluded from the type <CHARACTER>
if we want to allow implementations that use null-terminated strings. Never
specify that the set of characters has at least 256 elements. Instead
the range of numbers nnn for which #\xnnn is meaningful must be
restricted.
Otherwise we end up with Common-Lisp's distinction between CHARACTER
and STRING-CHAR.
Bruno Haible
* The 80s strike again
:From: Jeff Dalton
> It doesn't make life easier for me. There's a class of programmers,
> which doesn't include programmers like me, that's going to get this
> easier life.
>
> I think I explicitly mentioned that this would help a majority of
> programmers.
Is this some kind of utilitarianism, or what? (Utilitarianism has
the known flaw of allowing nasty things to happen to a few because
of benefits to the many.)
> Personally, given the choice between helping the
> minority of Lisp programmers vs. the vast majority of C/C++
> programmers, I prefer the latter.
I would rather benefit programmers by providing a better language than
by giving them what they already think they want. Moreover, I regard
this as a better long-term strategy, and I suspect you may even agree
with it. However, I don't see the point of trying to benefit via
programming language design a group of people who have very different
ideas than I do about what counts as a good language, especially
when there plenty of other people providing the kinds of languages
those people prefer. If this means I have to appeal to a minority
of C and C++ programmers, so be it.
> This necessarily means that the minority is slightly disgruntled.
Why does it *necessarily* mean this?
> I would have to say, too bad, Jeff,
> but I don't really think you would suffer very much, if at all.
This is voice of the 80s. Other factors count for nothing,
and an essentially economic decision prevails.
> The only alternative is to get them to do in Lisp things they wouldn't
> do at all otherwise. I find people willing to do so much in C and C++
> that I think the scope for this is small.
>
> Then you think Lisp doesn't have much future?
I think a couple of futures are still open.
Lisp could have a future much like its past, in which it is a somewhat
exotic AI language used in some kinds of research but only occasionally
for something more general. Scheme might survive as an educational
language.
Another possibility is that Lisp will survive as another tool in
the Unix / Windows NT toolbox, as well as having some specialized
appications of its own.
I don't know what factors are most important elsewhere, such as
among Ilog's customers, but here there are a couple of things that
work against Lisp.
One is that some people, informed by the usual misunderstandings
and prejudices about Lisp, decide that they can't use Lisp for a
project. I tell them that they're wrong about Lisp. It doesn't
have to be big and slow or run only on workstations. But that
doesn't do them any good so far as their project is concerned,
so they're understandably unconvinced. I need to point them to
an implementation that can produce small, efficient programs,
that can work with C and the X Window system, etc. Unfortunately,
I can't. This is a serious problem, and printf doesn't come into
it.
Another is that some useful applications start in Lisp but them
arrive in C++. (Ilog has done this.) People get the impression
that Lisp was inadequate rather than thinking Lisp made it easier
to develop the application.
Another factor, though less common, is that Lisp doesn't provide
good enough support for programming in the large. About a year
ago (Nov 92) we had an exchange about modules, libraries, etc.
I think Harley and I agreed that something larger than the current
modules is required. I also wanted to remove the parens that
end up surrounding all modules. I don't remember if anything
ever came of this, but I would like to revive the discussion.
-- jeff
* Harley write-ups: etc
:From: Harley Davis
> Can someone say what C specifies? (Richard?)
Ok (grouping corresponds to that in the C standard):
remove(), rename(), tmpfile(), tmpnam()
fclose(), fflush(), fopen(), freopen(), setbuf(), setvbuf()
fprintf(), fscanf(), printf(), scanf(), sprintf(), sscanf(),
vfprintf(), vprintf(), vsprintf()
fgetc(), fgets(), fputc(), fputs(), getc(), getchar(), gets(),
putc(), putchar(), puts(), ungetc(),
fread(), fwrite()
fgetpos(), fseek(), fsetpos(), ftell(), rewind()
clearerr(), feof(), ferror(), perror()
Note that this is *more* than what is in the proposal!
Here is what we did:
* We dumped all the xgetx, xputx, and fread/fwrite because:
1. If you want that level you can do it in C given that in Talk you
can pass a file stream to C and C gets the fd.
2. We didn't intend on sharing buffers anyway.
* tmpfile() is not particularly useful when you have tmpnam() and open().
* remove() is already taken by Lisp, so we used unlink().
* freopen() is not very useful.
* scanf() is too hard to get right in Lisp. (Do you pass in objects to
be modified? Does it return a list of values?) Plus, it's not all
that useful given that almost all I/O is done with READ. And, once
more, for those rare cases where it's useful, it can be done in C.
* the file position functions can be done in C.
* The error stuff is handled by the <posix-error> condition and the
interface. This avoids explicitly checking for errors at each call
and is thus considered an improvement and a better integration with
Lisp.
* eof is also better handled as a condition than an explicit check.
The v*printf() functions are irrelevant (we have rest lists), as are
the f* variants of getc() etc (which are just non-macro variants).
Right.
In addition, we probably *do* want to specify the fillbuf/flushbuf
functions (I don't have a POSIX document, but I assume it doesn't).
Whether we want set[v]buf depends on how we do that.
Well, we didn't put it in since file-streams don't share buffers with
C (especially if we target fd's rather than FILE*!)
We certainly don't want tcgetattr() and other such POSIXisms.
A subset of the terminal handling functions can be useful.
Essentially, just the part that manages cbreak mode. Also, isatty()
is quite useful.
-- Harley
* Harley write-ups: etc
:From: Harley Davis
Date: Tue, 7 Dec 93 17:44:23 GMT
:From: Richard Tobin
> I'm used to the default idea.
> Anyway, what do other people think?
Being used to the C/Unix way of doing things, I find Harley's version
clearer (expecially if it has a better name).
BTW, does the filename stuff allow for HTTP URLs? These look like
http://host[:port]/path/path/...
Currently, http: would be treated as a device and the rest as a simple
directory, without distinguishing the host field. If this is
considered important, we could also add a function which extracts the
host as a string.
-- Harley
* printf
:From: Harley Davis
> Now it looks to me like you're diverging from the POSIX routines
> in some significant ways, thought it's not clear exactly what they
> are. But once you diverge, having the same names is a somewhat
> mixed blessing.
>
> If you want to go with the sales argument, you can say that EuLisp has
> a POSIX binding with improvements, so C programmers using EuLisp have
> both a familiar set of functions and a more comfortable environment to
> use them.
That makes sense. We can say: see how nice even printf would be
if it wasn't embedded in a language that can just barely deal with
n-arg functions, or something along those lines.
Unbelievable. I actually said something that makes sense to Jeff.
Can we make this day into a EuLisp holiday?
> > I really do believe that printf is
> > better than some mutant format which is in any case based on printf.
>
> Was it based on printf?
>
> Please read section A.10.3 of EuLisp 0.99, page 54, remarks for the
> function format:
>
> "These formatting directives are intentionally compatible with the
> facilities defined for the function fprintf in ISO/EIC 9899 : 1990."
I think it's pretty clear that the EuLisp format function is
based on MacLisp / CL format, though no doubt we've also held
C in mind. (Hence the \n.) Indeed, this is so clear that I
thought you must be saying the original format was based on printf.
It's based on both printf and the old format, which is why I called it
a mutant. I guess "hybrid" would be more charitable (and genetically
accurate).
> The new directive %A prints all Lisp objects as if printed by prin.
> This will obviously handle numbers too.
But will the C directives handle these things? That was my point.
No, but you get a type test.
In addition, I need two Lisp data directives: one the prints escape
chars, quote marks around strings, etc; and one that doesn't.
In Talk, %A does print those things while %s does not. In other
words, %A is rereadable (to the extent that prin is for a given
object) while %s is human friendly.
> > How many unfortunate consequences could it have?
>
> Maybe it gets in the way of calling the real printf. Maybe it
> rules out too many extensions. Maybe POSIX-binding makes us too
> OS dependent.
>
> Unices are almost all POSIX compliant.
> Windows NT has a POSIX compliant module (and better ones available
> commercially.)
> VMS is now POSIX compliant.
> Even DOS has a goodly number of the POSIX functions.
Is this suppose to show it doesn't get in the way of calling the
real printf or that it doesn't rule out too many extentsions?
Or even that it doesn't bring us too close to the OS? It's
not just a matter of portability, it's also having losing OS
features intrude into the language.
You said POSIX was too OS-dependent, and I just tried to show that
most OS's today are POSIX compliant, so it is not much of a strike
against the proposal that it is based on POSIX rather than something
more OS-independent (like CL).
> I would be happy to propose a minimal foreign function interface (for
> C anyway) if people are interested.
I am, and there are some people on the net who would be too, because
they've been complaining about the lack of exactly that. From time
to time, I try to point such people at EuLisp, and I'd like to be
able to point them more effectively. Moreover I'm all the time
needing to call C things myself, although I call things like
waitpid which don't have the most accommodating interface.
I don't think a simple C interface (ie just calling foreign functions
and passing/returning data) will fix the problem with waitpid() and
friends. Indeed, it may be even worse in Lisp than in C without a
means to automatically translate enums and symbolic constants into
symbols. And I'm not about to propose a way to do that. However, the
interface will facilitate writing the part of the application that
deals with waitpid() in C and easily calling it, or calling Lisp from
it.
Perhaps you misunderstood me. %d and ~d makes it look like a EuLisp
printf is a good idea, instantly understandable to both Lisp and C
programmers. But that's the best example. There are others that
don't make things look so nice. %s and ~s do different things,
for instance. %X has a meaning that Lisp programmers won't expect,
thought they might get a clue from %e vs %E. And so on.
Yes, but most of these programmers already know printf. You, for
example, know both. Almost all our customers do mixed language
programming already. So the burden is rather small, even nonexistent
-- indeed, having just one set of directives to learn would be viewed
as a blessing by most.
> There are
> many important cases that aren't resolved by analogy with C's printf.
>
> But they're in the proposal.
Beside the point. Programmers won't be able to say "Humm. By
analogy with C printf, it probably works like this."
For most things they will.
> In any case this is a very small part of what programmers have to do
> when moving between Lisp and C.
>
> Naturally. Any implementation needs a foreign function interface. As
> I said, if people are willing to consider such a thing for the
> language (and I think the public would applaud a Lisp with a standard,
> even minimal, FFI), I would be happy to start the ball rolling with a
> proposal.
I'd like to see a proposal with the POSIX stuff simplified and an FFI
added.
I'll be happy to send along the Talk FFI stuff. I hope you'll be
willing to rework it plus the POSIX stuff into a form palatable for
inclusion in EuLisp.
> Is it because you're using non-POSIX functions or options?
It's chiefly because other people can't or didn't write portable code.
So this issue is not a problem for the proposal.
> Do you program for DOS or Windows 3?
I use only Unix, and stick to BSD when I can. The whole Sys V
excursion was a big mistake that will give the world to Microsoft.
As if it didn't belong to them already...
Making the same kind of mistake w.r.t. C++ will be a disaster too.
(The mistake is making something have a major impact by thinking
it will have a major impact and then acting accordingly, thus
bringing about a major impact that could otherwise have been
avoided. Meanwhile, some other folk, not so distracted, do
something else that wins.)
It seems to be a little late for C++; the "mistake" has been made.
You can't get gcc without a C++ compiler; you can't buy a PC C
compiler from Microsoft, Borland, IBM, or Watcom without C++ too.
However, I don't think of it as a disaster but rather an opportunity.
-- Harley
* Harley write-ups: etc
:From: Harley Davis
> I'm used to the default idea. It doesn't have to be compatible
> with CL's merge-pathnames. Maybe this is OK too, but if so I
> think the documentation could do more to suggest a "model"
> for understanding what roles the arguments play. The defaulting
> idea is a model I find helpful.
>
> I think it's complicated to understand for beginners, and really
> confusing for people used to C (cf Richard's message.)
What is it about C or Unix that makes it confusing? It's evidently
something I never encountered, and Richard's message didn't say what
it was either.
Usually, when programming with files in C, you just concatenate
strings. The model for merge-pathnames is string concatenation. This
is what makes it simpler.
You should also know that filenames aren't seen as having fields, as
opposed to pathnames. Functions like basename, extension, dirname
just process the filename as a string. Filenames are, however, better
than strings because 1. you can't create one with a bad syntax, 2.
they always use Unix notation regardless of the real OS (like Windows,
which uses \ instead of /.), 3. they allow better typed functions.
BTW, when I first encountered merging (not in Lisp or Unix, though
I can't rememeber for sure where it was), merging with defaults
seemed very natural to me. I still don't have a simple model for
whatever it is you're doing.
String concatenation, the simplest model possible.
> Humm. I'm not sure what all the implications of this are.
> If I can use printf in both Lisp and C, I'd expect the output to
> go to the same place and be interleaved in the obvious way.
> Indeed, it's a pain when using C with some Lisps that output
> to the same destination is independent.
>
> EuLisp printf needs to flush immediately for good interleaving. As
> far as prin goes, the flushing is explicit (or implicit with print).
> This is no worse than C++ streams, and people seem to live with that.
My point is this: if we have printf, I want these other things to
be true. If they aren't then printf starts being accompanied by
pitfalls. I don't want a big pitfall list for EuLisp. (My Common
Lisp one is only a few pages, but I've been forgetting to put things
in.)
I don't think there's a problem for printf in particular because the
proposed Lisp version flushes immediately and the C version also does.
There is a problem for prin vs. ftell() and company, but I believe
it's less serious in practice simply because you use ftell() much less
often than printf().
-- Harley
* The 80s strike again
:From: Harley Davis
Date: Tue, 7 Dec 93 20:08:07 GMT
:From: Jeff Dalton
> It doesn't make life easier for me. There's a class of programmers,
> which doesn't include programmers like me, that's going to get this
> easier life.
>
> I think I explicitly mentioned that this would help a majority of
> programmers.
Is this some kind of utilitarianism, or what? (Utilitarianism has
the known flaw of allowing nasty things to happen to a few because
of benefits to the many.)
Yes, it is utilitarianism, but the minority in this case does not
exactly suffer eternal damnation. At the worst, a few programmers who
have never used C will have to learn a new set of formatting
directives. Call it enlightened utilitarianism.
> Personally, given the choice between helping the
> minority of Lisp programmers vs. the vast majority of C/C++
> programmers, I prefer the latter.
I would rather benefit programmers by providing a better language than
by giving them what they already think they want. Moreover, I regard
this as a better long-term strategy, and I suspect you may even agree
with it. However, I don't see the point of trying to benefit via
programming language design a group of people who have very different
ideas than I do about what counts as a good language, especially
when there plenty of other people providing the kinds of languages
those people prefer. If this means I have to appeal to a minority
of C and C++ programmers, so be it.
Like I have said several times, this proposal could be replaced if
anyone had significantly better ideas. However, in the absence of
great new ideas, it seems wise to use a popular standard as a basis
for discussion.
> I would have to say, too bad, Jeff,
> but I don't really think you would suffer very much, if at all.
This is voice of the 80s. Other factors count for nothing,
and an essentially economic decision prevails.
No, if you suffered a lot, it would matter. But you don't. In fact,
if I may be so daring, you wouldn't suffer at all.
Another is that some useful applications start in Lisp but them
arrive in C++. (Ilog has done this.) People get the impression
that Lisp was inadequate rather than thinking Lisp made it easier
to develop the application.
Strangely enough, many of our new C++ clients have started
spontaneously asking if Ilog didn't have some sort of improved
prototyping/rapid development environment which works with C++...
Another factor, though less common, is that Lisp doesn't provide
good enough support for programming in the large. About a year
ago (Nov 92) we had an exchange about modules, libraries, etc.
I think Harley and I agreed that something larger than the current
modules is required. I also wanted to remove the parens that
end up surrounding all modules. I don't remember if anything
ever came of this, but I would like to revive the discussion.
Not only do we agree with the idea of having larger units, but we have
been doing something about it for a couple years now. For the parens,
we never had the problem.
-- Harley
* Harley wirte-ups: defliteral
:From: Harley Davis
Date: Tue, 7 Dec 93 19:49:08 GMT
:From: Jeff Dalton
> > What you say is entirely true, and we could adopt that approach.
>
> Let's then. But is it clear what "that approach" is? (See below.)
>
> We shouldn't do something just because we can. For example, the fact
> that your idea is rather complex to explain is a consideration.
I don't think my idea is even slightly complex to explain, especially
compared to Talk's defglobal trick. I still don't understand why
that works until I think for a minute or so.
It is for the functions called during literal evaluation. For instance,
(defun foo (x) ...)
(defliteral %l% (foo 5))
vs.
... import foo from m1 for execution ...
(defliteral %l% (foo 5))
vs.
... import foo from m1 for compilation ...
(defliteral %l% (foo 5))
vs.
... foo is defined in a std. lib ...
(defliteral %l% (foo 5))
Which works? What's the rule? Ugly cases can be generated at will.
> But the workaround (using a defglobal instead of or paired with a
> defliteral) is tricky too.
>
> But it follows an extremely clear model of module processing/execution.
If anything, it discredits that model.
You should try using it before shitting all over it.
> In other words, in Talk, in no sense is any
> form ever evaluated during module preprocessing. In your proposal,
> some forms are evaluated. We considered the simple model to be very
> important.
I think my model is simpler overall. Moreover, I think this rule
about modules is distorting more and more of the language. That
we can't even have a simple way of defining constants is going
too far.
It's not simpler because you have to add extraneous rules to
disambiguate cases like the one above. I'll be interested in seeing
the precise statement of your rule so I can find more tough cases to
throw at it.
We've gone down this route here and the result for us has always been
far too messy and complex, which is why we backed out to the current
state. If you can somehow combine execution and compilation
dependencies in a marvelous way that is both simple and allows the
creation of minimal applications, I'll be impressed. But I'm not
holding my breath.
> (+ 1 1) is an expression containing only constants in the sense that
> I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
>
> Is '+' a literal? If so, what about user-defined functions? This
> problem is definitely harder in EuLisp than in, say, C, because even
> the core functions are supposed to be redefinable and treated as
> normal bindings.
Is this so? When did it happen?
It's been this way for years. It's even worse: syntax is also
redefinable.
I'm surprised you're suggesting this solution without understanding
EuLisp's module system.
People used to be concerend about being able to analyze code. Are
you now telling me that when the compiler looks at a module in
which + appears it can't tell what + means?
It can sort of tell since we specify standard module names. But this
means that your rule is going to be even more hairy: The literal value
can use data literals, other symbolic literals, and functions defined
in the standard modules x, y, and z. (Do you also allow functions
from level-1 libraries? Won't the user be frustrated if his own
modules can't be used? etc.)
If so, then either we've broken the ability to analyze code or the
analysis is incredibly wimpy.
I completely agree with you. I have always complained about this
aspect of the EuLisp module system. However, some people (Dave for
instance) want to be able to rename any function so they can, for
example, have complete Scheme compatibility.
> The defglobal solution suggested in the proposal gets around requiring
> more than 1 level. That's its purpose.
I know that's its purpose. But far from making this approach look
good, it makes it look like it must be based on a mistake.
But it's not.
-- Harley
* Harley write-ups: default handlers
:From: Harley Davis
Date: Tue, 7 Dec 93 19:32:37 GMT
:From: Jeff Dalton
> The <fill-buffer>/<flush-buffer> conditions are just inherently
> continuable. (Or, if you really insist, they are always signaled
> continuably since the system signals them that way.)
The names suggest requests rather than conditions. <buffer-empty>
sounds more like a condition, and I don't see why it has to be
continuable.
It has to be continuable because you might be in the middle of a
(Lisp) READ, and not continuing could mess up some state.
The line counting and dimilar examples came up before. If they
require inherently continuable conditions, I'd rather lose the
examples.
Fine, modify the proposals to take out this part.
I think this is an issue we considered several times before and
we stayed with the approach we have now. I don't think we should
overturn all that as a side effect of adopting a stream proposal.
I also don't want to have to spend a lot of time defending those
decisions. Is it as essentail part of the stream proposal?
I think the proposal could work without it, but I don't have the time
to rewrite it. (Indeed, I don't really have the time to continue this
exponentially growing discussion, but I will anyway because I think
there is a chance of converging.)
> That is, I think we should make signalling independent of any
> provision for recovery. Default handlers encourage a different
> way of thinking in which the signaller has to know what the
> default handler does and be prepared to deal with it.
>
> What's wrong with this way of thinking for these conditions?
Why should I have to provide a way to continue? If you want to define
a protocol in which a way to continue is normally provided, that may
be ok. But if it becomes a property of the type, then pretty soon
every case where we can think of a useful way to continue will start
requiring this. On bad consequence of this, indicated in my earlier
message, is that programmers will signal a less appropriate condition
just to avoid dealing with continuing.
I don't see how this argument applies here. The programmer doesn't
signal these conditions, he just handles them.
> Default handlers also, of course, have the usual problems of such global
> arrangements, that when different parts of a system have different
> requirements it's difficult for them to fit together.
>
> If there is one default handler which is a unique object I don't see
> how this problem arises.
What do you mean? Different parts may want different defaults.
I don't understand this response. If one part defines a condition class,
then it gets to decide the default for that class. Nobody else does.
This is an extension of the rule that you can't define a method for a
specified gf specializing only on defined classes.
> The Le-Lisp experience showed that both dimensions of extensions are
> desirable. We could, of course, just say "tough" to the second type
> of extension and get rid of the conditions. I think this would be
> sad, especially since it's quite difficult to subclass file streams.
I would think that one of our aims would be to provide file streams
that were reasonably easy to subclass.
The problem comes up with the open function. Also, it may not be too
useful to subclass them. (In the proposal, modified to replace FILE*
with fd, almost everything streamable from POSIX becomes a file
stream, including pipes, fifos, sockets, displays, devices, etc.)
> What exactly is the argument for the idea that no condition
> class is ever inherently continuable?
I haven't given one. My argument is more pragmatic and aesthetic
than essentialist. However, if I signal a condition, I'm saying
that something has occurred. Why should this ever require that
I also provide a way to continue? It looks like a separate decision
to me.
Sometimes yes, sometimes no. Why take a firm stance? With errors it
is clearly the signaler who decides, but why should this always be the
case?
> (In Talk, continuability is
> always a question of the protocol between the signaler and the handler
> for a given condition class. It is not an explicit notion.)
This is better, and I might not mind it if it were sufficiently well
confined. But I don't want us to start requiring this in all kinds of
cases, and I think it has various problems.
I would be curious to know what problems you have in mind.
-- Harley
* The 80s strike again
:From: Jeff Dalton
> Yes, it is utilitarianism, but the minority in this case does not
> exactly suffer eternal damnation.
It's still suspect reasoning.
> Like I have said several times, this proposal could be replaced if
> anyone had significantly better ideas.
What's wrong with that's in there now (or in the past, if it's
been deleted)?
Anyway, I'd prefer something simple now, leaving some things
for additional libraries.
> However, in the absence of great new ideas, it seems wise to use a
> popular standard as a basis for discussion.
Ok.
> This is voice of the 80s. Other factors count for nothing,
> and an essentially economic decision prevails.
>
> No, if you suffered a lot, it would matter.
What you mean (it seems) is "yes, but so what?"
> Strangely enough, many of our new C++ clients have started
> spontaneously asking if Ilog didn't have some sort of improved
> prototyping/rapid development environment which works with C++...
You must be doing something right, then. Good work.
> Another factor, though less common, is that Lisp doesn't provide
> good enough support for programming in the large. About a year
> ago (Nov 92) we had an exchange about modules, libraries, etc.
> I think Harley and I agreed that something larger than the current
> modules is required. I also wanted to remove the parens that
> end up surrounding all modules. I don't remember if anything
> ever came of this, but I would like to revive the discussion.
>
> Not only do we agree with the idea of having larger units, but we have
> been doing something about it for a couple years now. For the parens,
> we never had the problem.
Well, I think EuLisp modules have serious problems. To indicate how
serious I think they are, I said (last year) that with the then current
module system I would not often use EuLisp by choice. A number of
good ideas were suggested at that time, but the usual Keith - Harley -
Jeff disagrements prevented a consensus from emerging.
I'd like to do something about this, but I don't want to adopt
exactly what's in ITalk. I think it would be better to use Eulisp
for a distinct design.
-- jd
* Harley write-ups: default handlers
:From: Jeff Dalton
> > The <fill-buffer>/<flush-buffer> conditions are just inherently
> > continuable. (Or, if you really insist, they are always signaled
> > continuably since the system signals them that way.)
>
> The names suggest requests rather than conditions. <buffer-empty>
> sounds more like a condition, and I don't see why it has to be
> continuable.
>
> It has to be continuable because you might be in the middle of a
> (Lisp) READ, and not continuing could mess up some state.
READ has to be able to not continue after errors, so I don't
think this has to be a problem. In any case, I think it's
absolutely wrong to overturn carefully considered decisions
as a side effect of a stream proposal. It has to be considered
separately, and I don't think there's time now to do it
properly.
> I don't see how this argument applies here. The programmer doesn't
> signal these conditions, he just handles them.
Can't the programmer define new stream classes or new kinds of
buffering?
Anyway, the argument is about the general issues of introducing
default handlers and making some conditions inherently continuable.
> What do you mean? Different parts may want different defaults.
>
> I don't understand this response. If one part defines a condition class,
> then it gets to decide the default for that class. Nobody else does.
> This is an extension of the rule that you can't define a method for a
> specified gf specializing only on defined classes.
This rule only says (at best) who wins. If there's a condition
class, different parts of the program may want to handle it in
different ways by default. The classes involved may be user-defined
as well. (Remember that I'm talking about introducing default
handlers, not about having a special case for two conditions
with no facility for anything similar for other condition types.)
> I would think that one of our aims would be to provide file streams
> that were reasonably easy to subclass.
>
> The problem comes up with the open function. Also, it may not be too
> useful to subclass them. (In the proposal, modified to replace FILE*
> with fd, almost everything streamable from POSIX becomes a file
> stream, including pipes, fifos, sockets, displays, devices, etc.)
Then FILE sounds like a misleading name.
BTW, if you use fds instead of FILE*s, I still think fopen makes
sense as a name, because you're getting a stream, which is the
EuLisp analogue for a FILE *, rather than an fd.
> > What exactly is the argument for the idea that no condition
> > class is ever inherently continuable?
>
> I haven't given one. My argument is more pragmatic and aesthetic
> than essentialist. However, if I signal a condition, I'm saying
> that something has occurred. Why should this ever require that
> I also provide a way to continue? It looks like a separate decision
> to me.
>
> Sometimes yes, sometimes no. Why take a firm stance? With errors it
> is clearly the signaler who decides, but why should this always be the
> case?
I'm not trying to rule it out forever, if sufficiently good reasons to
do something else come along. But once we start requiring continue
handlers (for lack of a better name) it will be difficult to go back.
> > (In Talk, continuability is
> > always a question of the protocol between the signaler and the handler
> > for a given condition class. It is not an explicit notion.)
>
> This is better, and I might not mind it if it were sufficiently well
> confined. But I don't want us to start requiring this in all kinds of
> cases, and I think it has various problems.
>
> I would be curious to know what problems you have in mind.
I tried to explain in my message. It's kind of frustrating to get a
reply like this, because it looks like everything I said had no
effect.
-- jd
* Harley write-ups: etc
:From: Jeff Dalton
> What is it about C or Unix that makes it confusing? It's evidently
> something I never encountered, and Richard's message didn't say what
> it was either.
>
> Usually, when programming with files in C, you just concatenate
> strings. The model for merge-pathnames is string concatenation. This
> is what makes it simpler.
>
> You should also know that filenames aren't seen as having fields, as
> opposed to pathnames.
Doesn't matter for the point I'm making here.
> [...]
> String concatenation, the simplest model possible.
Well, when I look at the proposal, string concatenation is not
what springs to mind. In one case, the directory part of the
first arg is used as a default, in another it's concatenated
onto the directory of the 2nd arg (but not to the front of the
entire 2nd arg); for devices, there's an exclusion rule instead
of any concatenation; and it's not always clear what happens to
extensions.
-- jd
* Harley wirte-ups: defliteral
:From: Jeff Dalton
> I don't think my idea is even slightly complex to explain, especially
> compared to Talk's defglobal trick. I still don't understand why
> that works until I think for a minute or so.
>
> It is for the functions called during literal evaluation. For instance,
>
> (defun foo (x) ...)
>
> (defliteral %l% (foo 5))
But this isn't "why that [the defglobal trick] works". (I know why
it works, but repeatedly find it counterintuitive.)
> > But it follows an extremely clear model of module processing/execution.
>
> If anything, it discredits that model.
>
> You should try using it before shitting all over it.
But I have tried it. I've used Ilog Talk modules and (via FEEL)
EuLisp ones. I also implemented Eulisp modules once upon a time.
> I think my model is simpler overall. Moreover, I think this rule
> about modules is distorting more and more of the language. That
> we can't even have a simple way of defining constants is going
> too far.
>
> It's not simpler because you have to add extraneous rules to
> disambiguate cases like the one above. I'll be interested in seeing
> the precise statement of your rule so I can find more tough cases to
> throw at it.
So far I haven't see any tough case for my rule, only something
that indicates Eulisp modules may have made analysis effectively
impossible short of looking at the whole program.
> We've gone down this route here and the result for us has always been
> far too messy and complex, which is why we backed out to the current
> state. If you can somehow combine execution and compilation
> dependencies in a marvelous way that is both simple and allows the
> creation of minimal applications, I'll be impressed. But I'm not
> holding my breath.
Every language I have ever heard of that allows literal constants
to be defined allows the definitins to refer to other literals
defined in the same module (or equiv structure).
> > (+ 1 1) is an expression containing only constants in the sense that
> > I had in mind. So is (/ %pi% 2) when %pi% is defined by defliteral.
> >
> > Is '+' a literal? If so, what about user-defined functions? This
> > problem is definitely harder in EuLisp than in, say, C, because even
> > the core functions are supposed to be redefinable and treated as
> > normal bindings.
>
> Is this so? When did it happen?
>
> It's been this way for years. It's even worse: syntax is also
> redefinable.
>
> I'm surprised you're suggesting this solution without understanding
> EuLisp's module system.
What about the module system do I not understand? I think I
understand all too well that it has a number of losing features.
I just thought we hadn't gone so far as allowing assignment to
core names. Renaming is a different matter, about which more
below.
Assignment across module boundaries is a bad idea in general,
in my view, because the compiler has to look at all client modules
to tell whether an exported name is subject to assignment.
This makes it kind of difficult to compile at all, as compilation
is usually understood.
> People used to be concerend about being able to analyze code. Are
> you now telling me that when the compiler looks at a module in
> which + appears it can't tell what + means?
>
> It can sort of tell since we specify standard module names. But this
> means that your rule is going to be even more hairy: The literal value
> can use data literals, other symbolic literals, and functions defined
> in the standard modules x, y, and z. (Do you also allow functions
> from level-1 libraries? Won't the user be frustrated if his own
> modules can't be used? etc.)
Users will be pretty frustrated if they can't do things like
(defliteral %pi-over-two% (/ %pi% 2)) without defining two levels of
module or using the defglobal trick. That we need the concept "can be
evaluated at compile time" doesn't sound that bad to me. I understand
it, anyway, and simple cases will be obvious to all.
> If so, then either we've broken the ability to analyze code or the
> analysis is incredibly wimpy.
>
> I completely agree with you. I have always complained about this
> aspect of the EuLisp module system. However, some people (Dave for
> instance) want to be able to rename any function so they can, for
> example, have complete Scheme compatibility.
Renaming doesn't bother me too much, although it makes it difficult
for someone to tell what's going on by local inspection. The compiler
can figure out what's called what (and what, say, + means) by looking
at the module definition and at what "server" modules (w.r.t. the
module being compild) have exported. But if some random module that
uses a core module can just say (setq car foo), we're in trouble.
> > The defglobal solution suggested in the proposal gets around requiring
> > more than 1 level. That's its purpose.
>
> I know that's its purpose. But far from making this approach look
> good, it makes it look like it must be based on a mistake.
>
> But it's not.
Is so.
-- jd
* Defliteral vs defconstant
:From: Jeff Dalton
> > Is '+' a literal? If so, what about user-defined functions? This
> > problem is definitely harder in EuLisp than in, say, C, because even
> > the core functions are supposed to be redefinable and treated as
> > normal bindings.
> Is this so? When did it happen?
> It's been this way for years. It's even worse: syntax is also
> redefinable.
> I'm surprised you're suggesting this solution without understanding
> EuLisp's module system.
I just got a copy of the 0.99 definition onto the machine at home
to look up exactly what the rules are. I'd also thought about
defliteral, noted that there was no point in defining one that
wasn't exported, and though about a more general mechanism based
on constant/mutable bindings. It turned out that this is more or
less what EuLisp has now, which is pretty much what I thought it
had. Ordinary function definitions can't be changed by assignment
in random modules; defconstant can be used to define a name that
can't be assigned to. Compiler optimizations can do the rest;
so I don't think defliteral is needed.
BTW, I've long since accepted that macros aren't going to be usable
in the module in which they're defined in EuLisp; so that more
general issue (general because literals are analogous to zero-
parameter macros) is not in dispute.
-- jd
* Defliteral vs defconstant
:From: Harley Davis
Date: Thu, 9 Dec 93 03:21:24 GMT
:From: Jeff Dalton
I just got a copy of the 0.99 definition onto the machine at home
to look up exactly what the rules are. I'd also thought about
defliteral, noted that there was no point in defining one that
wasn't exported, and though about a more general mechanism based
on constant/mutable bindings. It turned out that this is more or
less what EuLisp has now, which is pretty much what I thought it
had. Ordinary function definitions can't be changed by assignment
in random modules; defconstant can be used to define a name that
can't be assigned to. Compiler optimizations can do the rest;
so I don't think defliteral is needed.
I am opposed to any language feature which requires a smart compiler.
Either we require the processing to be done at compile-time, which
means either an interpreter or smart compiler in the compilation
environment, or the literal calculation is optional. If it requires
processing of a form in the compilation environment, it would be the
only such case and make implementations harder. If it's optional, it
loses the guarantee of Talk's defliteral, which is that the literal is
really substituted where it is referenced.
The simplicity and attractiveness (to us, anyway) of defliteral arises
primarily from the fact that it's a no-brainer and fits in easily with
the basic module processing view I've outlined before. (ie, nothing
evaluated in a module being compiled.)
-- Harley
* Streams
:From: Richard Tobin
Let's see if we can find some consensus on the streams issue.
Starting at the top and working downwards:
printf vs format
I don't think this is much of an issue. I believe Jeff will go along
with printf if the rest of the system is OK.
read, print etc
I think these are uncontroversial. If we want to have streams that
are, say, queues of lisp objects, this is a level at which the user
must be able to add extensions, so these should be generic functions
that discriminate on the stream (or they should be wrappers for such
functions).
getchar, putchar etc
We should have these. The standard methods for read and print should
call them, or behave as if they do (see below). Previous proposals
have had these be generic functions so that the user can implement
new types of character stream; this is inefficient and it seems to
me to be the main advantage of Harley's proposal that it overcomes
this.
If we adopt Harley's approach, getchar is just a simple function that
extracts the next character out of a buffer. It can be compiled inline
(perhaps) or read can just get the character itself. The extensibility
is provided by allowing the user to provide the functions to fill/empty
the buffer.
How the buffer filling/emptying is done is the controversial point.
In Ilog Talk, this is done by signalling a condition. Jeff has
objected to this because it either requires default handlers (which we
have rejected), or some special purpose mechanism. An alternative is
just to have generic functions that fill and flush the buffers. This
doesn't allow the user to add functionality to an existing stream
instance (eg to count characters) but it does allow him to create a
new class of stream that does this. And it would be possible wrap
such a stream around an existing instance of a stream (the fill-buffer
method would just repeatedly call getchar on the existing stream).
I propose that we adopt the generic function approach.
It occurs to me that we can specify a bit more and make this even more
like C/Unix. Instead of providing fill/flush buffer functions, we can
have something corresponding to Unix's file descriptors. These would
support reading and writing a block of characters. I can't
immediately think of a good name but let's call them ustreams for now
(note that they're not a subclass of stream - they don't support any
of the normal stream operations).
The operations on ustreams would be:
(ustream-open path mode) -> ustream
(ustream-close ustream)
(ustream-read ustream) -> string
(ustream-write ustream string)
(ustream-seek ustream position) -> position
There would now be three layers to the system:
read, print (Unique to Lisp)
--------------------------
fopen, fclose, fprintf, getc, putc, fseek (Correspond to C's stdio)
--------------------------
ustream-* (Correspond to Unix system calls)
Read and print apply to all streams. Their default methods use getc
and putc, or appear to. Extending to non-character streams is done by
defining new methods for read and print.
Fprintf etc apply only to character streams and are not generic. They
fill and flush their buffers by calling the ustream functions.
The ustream functions are generic. Extending to new kinds of character
stream is done by defining new methods on ustream-*.
(Note that in this approach I'm including what we previously called
integer streams as a kind of character stream. Maybe we should refer
to "buffered-byte-streams" instead. put-integer would work [as if] by
decomposing the integer into bytes and calling putc repeatedly.)
-- Richard
* Eulisp 0.99 syntax definitions
:From: Jeff Dalton
Where does the new notation for syntax definitions come from?
Why did you pick this rather than the more traditional style
you used before?
It's interesting. I'm not sure whether I like it or not.
-- jeff
* Defliteral vs defconstant
:From: Jeff Dalton
> I am opposed to any language feature which requires a smart compiler.
It doesn't require a very smart compiler. Pre-evaluating
some expressinos at compile time is pretty standard stuff.
Moreover no special tricks whatsoever are required for the case
where a defconstant is exported, which is the only useful case
for defliteral.
I think this approach is simpler, easier to understand, and closer
to what other languages do. Moreover, it requires no changes
to the definition.
BTW _were_ you saying that + and the like could be assigned to?
-- jeff
* Streams
:From: Jeff Dalton
> Let's see if we can find some consensus on the streams issue.
I think something somewhere in this area wouild be reasonable.
In any case, I'd like to retain the buffer-based approach that
Harley suggested, since it looks like an excellent way to avoid
the problem of efficient but flexible streams. Much better
than the generic read-char ideas we were messing with before
(though generic read-char can be fast in the standard case,
it requires complex or duplicated code and isn't fast when
the user starts defining stream classes.)
I'd also like to see a simple foreign function interface
and (this is in a different area) a way to define new function
classes. Is this already possible in EuLisp? If not, or if
it's too hard to use, the approach taken in Ilog Talk looked
reasonable to me.
-- jeff
* Defliteral vs defconstant
:From: Harley Davis
Date: Mon, 13 Dec 93 10:58:16 GMT
:From: Jeff Dalton
> I am opposed to any language feature which requires a smart compiler.
It doesn't require a very smart compiler. Pre-evaluating
some expressinos at compile time is pretty standard stuff.
Which expressions?
Moreover no special tricks whatsoever are required for the case
where a defconstant is exported, which is the only useful case
for defliteral.
I don't understand.
I think this approach is simpler, easier to understand, and closer
to what other languages do. Moreover, it requires no changes
to the definition.
You still haven't explained exactly what constitutes a constant
expression for EuLisp, and whether the compiler (LPU) is required to
evaluate them or not.
If the compiler is not required to evaluate such constant expressions,
then defliteral is orthogonal to defconstant (and I would even suggest
that defconstant is not very useful given deflocal.)
BTW _were_ you saying that + and the like could be assigned to?
No, we decided a long time ago that defining forms made constant
bindings. But what constitutes exactly the "and the like" for your
proposal?
-- Harley
* Streams
:From: Julian Padget
I'm not sure when Dave is going to send it out, but he and I spent
some time working on streams at GMD the week before last. It could
satisfay the "something somewhere in this area" criterion! It extends
Harley's model to support streams of objects where the source or sink
is not a file, but it retains the non-generic buffered approach for
character input/ouptut. Dave was last seen reworking the streams
section of 0.99 to describe this scheme and he sent me mail on Friday
to say it was nearly done.
--Julian.
* Streams
:From: Jeff Dalton
> I'm not sure when Dave is going to send it out, but he and I spent
> some time working on streams at GMD the week before last. It could
> satisfay the "something somewhere in this area" criterion! It extends
> Harley's model to support streams of objects where the source or sink
> is not a file, but it retains the non-generic buffered approach for
> character input/ouptut.
Does anyone have a view on Richard's suggestion that there be
an analogue of the FILE * / fd distinction (ustreams)?
Ustreams were the basic sources and sinks.
-- jeff
* Defliteral vs defconstant
:From: Jeff Dalton
> Date: Mon, 13 Dec 93 10:58:16 GMT
> :From: Jeff Dalton
>
> > I am opposed to any language feature which requires a smart compiler.
>
> It doesn't require a very smart compiler. Pre-evaluating
> some expressinos at compile time is pretty standard stuff.
>
> Which expressions?
Typically, arithmetic. Only certain kinds of values make sense,
because you're anticipating the value it will have when the program
gets going.
> Moreover no special tricks whatsoever are required for the case
> where a defconstant is exported, which is the only useful case
> for defliteral.
>
> I don't understand.
I assumed some context from my earlier message. A defliteral in
M has to be exported and used in other modules M'. That same case
for defconstant requires no compile-time evals or tricky optimizations.
That is, if I do a defconstant in M and import it from M to M', how is
this different from having done a defliteral in M? I don't see why it
has to be interestingly different. The constant has a value that
cannot change, so it can be in-lined etc.
> I think this approach is simpler, easier to understand, and closer
> to what other languages do. Moreover, it requires no changes
> to the definition.
>
> You still haven't explained exactly what constitutes a constant
> expression for EuLisp, and whether the compiler (LPU) is required to
> evaluate them or not.
I'm happy to leave that up to implementations, just as I'm
happy to leave in-lining generally up to implementations.
However, a constant expression is basically an expression whose value
can't change.
> If the compiler is not required to evaluate such constant expressions,
> then defliteral is orthogonal to defconstant (and I would even suggest
> that defconstant is not very useful given deflocal.)
In my view, defconstant subsumes all useful instances of defliteral.
It also has other uses. Why should I be able to get a constant
binding only via defun and friends?
> BTW _were_ you saying that + and the like could be assigned to?
>
> No, we decided a long time ago that defining forms made constant
> bindings. But what constitutes exactly the "and the like" for your
> proposal?
I was wondering what _you_ were saying, so maybe you'll have to fill
that in, if you want. However, I meant at least the names defined by
EuLisp (as renamed by module madness, etc). For instance, if someone
imports the eulisp level 0 module w/o renaming, can the compiler
tell that = and car have their usual meanings?
-- jd
* Streams
:From: Dave De Roure
> I'm not sure when Dave is going to send it out, but he and I spent
> some time working on streams at GMD the week before last. It could
> satisfay the "something somewhere in this area" criterion! It extends
> Harley's model to support streams of objects where the source or sink
> is not a file, but it retains the non-generic buffered approach for
> character input/ouptut. Dave was last seen reworking the streams
> section of 0.99 to describe this scheme and he sent me mail on Friday
> to say it was nearly done.
I'm modifying it in the light of (some of) the discussion - I've also
tried to produce code to demonstrate that we can do (some of) the things
we want with it. I'll post the amended definition text for comment
tomorrow (v. busy today).
-- Dave
* FFI proposal
:From: Richard Tobin
A few comments on the FFI.
I don't think it's reasonable to include anything in the language that
requires a conservative GC.
The conversion of streams to file descriptors only makes sense for
POSIX systems. We should say that it produces an integer under POSIX,
but may produce something else in other systems.
I assume the number on the end of DEFINTERN is the number of
arguments, though this is not explicitly stated. If so, why not make
it an argument of the macro instead? Remember you can do
#define DEFINTERN(cname, lname, args) DEFINTERN##args(cname, lname)
I don't think the lack of computed access to functions by name is a
serious problem. Many implementations will have such access
internally (at least for exported functions). Others (eg those in
which all module linking is done statically) may have to provide some
kind of linker. If you want to resolve it in the language, there could
be a module syntax for exporting functions to foreign languages or a
dynamic mechanism for puttint lisp functions into a table that could be
accessed from C.
Should the name of the lisp function be a C string rather than just
the name? If it's just the name, the macro can stringify it if
necessary, or munge it into some magic name to be recognised by the
linker. It can't destringify it however, so having it be a string
rules out the second alternative. (Of course, if it can contain
characters invalid in C identifiers it can't do that anyway.)
-- Richard
* FFI proposal
:From: Harley Davis
In article Richard Tobin writes:
A few comments on the FFI.
I don't think it's reasonable to include anything in the language that
requires a conservative GC.
OK, as I stated in the introduction, the only difference is
eliminating the things which reference ptr and require <address> for
returning non-Lisp pointers. Personally, I don't care one way or
another on this issue, but it looks currently like conservative GC's
are carrying the day.
The conversion of streams to file descriptors only makes sense for
POSIX systems. We should say that it produces an integer under POSIX,
but may produce something else in other systems.
Where "something else" is of course useful as a stream...
I assume the number on the end of DEFINTERN is the number of
arguments, though this is not explicitly stated. If so, why not make
it an argument of the macro instead? Remember you can do
#define DEFINTERN(cname, lname, args) DEFINTERN##args(cname, lname)
Argument accepted.
I don't think the lack of computed access to functions by name is a
serious problem. Many implementations will have such access
internally (at least for exported functions). Others (eg those in
which all module linking is done statically) may have to provide some
kind of linker. If you want to resolve it in the language, there could
be a module syntax for exporting functions to foreign languages or a
dynamic mechanism for puttint lisp functions into a table that could be
accessed from C.
Perhaps the C syntax should mention a module and a function name to
allow more implementation possibilities. The restriction would be
that the module named must the one defining the function in question,
rather than one which might import the function (and possibly rename
it locally). So: DEFINTERN(cname, lmodule, lname, args).
Should the name of the lisp function be a C string rather than just
the name? If it's just the name, the macro can stringify it if
necessary, or munge it into some magic name to be recognised by the
linker. It can't destringify it however, so having it be a string
rules out the second alternative. (Of course, if it can contain
characters invalid in C identifiers it can't do that anyway.)
And it can, so there you are. That's the reason. (Simple example:
the character '-'.)
Alternatively, we could have a sort of extern "C" form in Lisp, in
which the external identifiers were limited to C syntax. I personally
think this would be unwieldy; DEFINTERN is much easier to use and it's
in the right place.
-- Harley
* Streams
:From: Dave De Roure
> I'm not sure when Dave is going to send it out, but he and I spent
> some time working on streams at GMD the week before last. It could
> satisfy the "something somewhere in this area" criterion! It extends
> Harley's model to support streams of objects where the source or sink
> is not a file, but it retains the non-generic buffered approach for
> character input/ouptut. Dave was last seen reworking the streams
> section of 0.99 to describe this scheme and he sent me mail on Friday
> to say it was nearly done.
Right. Working this stuff into the definition, and writing code to
demo it, has been a useful nightmare, full of those beasties that
populate stream-hell...
Basically we perceived the same consensus as Richard summarised, ie character
or file streams with specific operations, an abstract stream class, and
generic stream operations with methods for character streams.
I need some help from you all on a few issues that have emerged:
The default handler approach to buffer fill and flush operations is
attractive because it gives the programmer two orthogonal techniques:
they can put methods on the generic function (essentially a `static'
approach at level 0) or they can introduce new handlers (a `dynamic'
approach). After discussion at GMD, Julian and I both became convinced
of the handler solution. Since we don't have default handlers I had
decided, in the spirit of compromise, to specify the generic function
solution in the definition, thinking this admits the handler solution
as a possible implementation technique. But I'm no longer sure we can
postpone this decision - I think the user should know if there are
conditions being raised. Incidentally, there is a third (orthogonal?)
approach, which is to specify the fill/flush functions by passing them
as arguments when the stream is created.
Q. Should the generic fill/flush functions be called directly or via
a default handler?
Ilogtalk doesn't have seek or unget operations. I propose that
we support these. In fact, I propose this rationale: we should have
a standard stream protocol (inc seek, unget etc) which could be applied
to any stream, and if an operation is unsupported for a particular
stream type, a condition is raised. This is to do away with the added
complexity of seekable streams (since you often don't know whether a
stream is seekable or not until you try it).
Q. Should we support seek and unget operations?
What of input-and-output streams? We once had a mechanism for taking
two streams and combining them into a single stream object which could
respond to both input and output operations.
Q. Should we support combined i-o streams in the definition?
We spent some time at GMD checking that the new streams would fit into
the definition wrt threads and collections. To do this neatly, we
created the <object-stream> class alongside the <character-stream>
(aka <file-stream>) class, with the intention that these streams would
basically be queues of objects (compatible with generic stream operations
and with collections). However, we didn't work this through fully.
Our file-streams are basically end-points (streams connected to some
external data source/sink); are the object streams also end-points
(which can be connected together, a la sockets)? For the purposes
of integrating collections with streams, I think what we actually
need is a <fifo-stream> class. Like the combined streams above,
one of these objects responds to input and output operations.
Q. Shall we introduce a <fifo-stream> class in the definition?
Finally, I have some sympathy with Richard's comment about URLs.
We could lead the field here by introducing a {file,path}name
mechanism which accommodates the extra information (probably just
host or address) needed by URLs and many other network-oriented
data services (eg in my own work, my filename objects have these
slots: host, protocol/drive, dirname, basename, extension).
If we have an abstract filename class then these extra slots
can be added. I'll not make a specific question of this, but
I would be interested in your opinion.
When we have some preliminary agreement on these answers, I'll send out
the corresponding definition-speak as a focus for further discussion.
Bye for now,
-- Dave
* Streams
:From: Harley Davis
Just a couple points, maybe more later...
In article Dave De Roure writes:
The default handler approach to buffer fill and flush operations is
attractive because it gives the programmer two orthogonal techniques:
they can put methods on the generic function (essentially a `static'
approach at level 0) or they can introduce new handlers (a `dynamic'
approach). After discussion at GMD, Julian and I both became convinced
of the handler solution. Since we don't have default handlers I had
decided, in the spirit of compromise, to specify the generic function
solution in the definition, thinking this admits the handler solution
as a possible implementation technique. But I'm no longer sure we can
postpone this decision - I think the user should know if there are
conditions being raised. Incidentally, there is a third (orthogonal?)
approach, which is to specify the fill/flush functions by passing them
as arguments when the stream is created.
Q. Should the generic fill/flush functions be called directly or via
a default handler?
We use the conditions for all sorts of interesting hacks. For
example, our pretty printer is based on trapping the flush-buffer
condition and not calling the gf in certain cases. (The idea is,
first you try to print a whole expression on one line. If that fails
-- ie, if flush-buffer is called while printing the expression b/c
right margin is passed -- then take the subexpressions and print them
one per line, indented appropriately.) This application is dynamic
control of all stream types, and ignores the details of the protocol.
Many other examples could be cited. So I like the condition approach
(of course.)
However, I don't think it is really problematic to just specify the
gf's and leave the conditions as an implementation approach, if the
distrust of default handlers is widespread. Is anybody else against
default handlers on principle?
Ilogtalk doesn't have seek or unget operations. I propose that
we support these. In fact, I propose this rationale: we should have
a standard stream protocol (inc seek, unget etc) which could be applied
to any stream, and if an operation is unsupported for a particular
stream type, a condition is raised. This is to do away with the added
complexity of seekable streams (since you often don't know whether a
stream is seekable or not until you try it).
Q. Should we support seek and unget operations?
We decided not to support them because the interaction with buffering
is hard to explain. This is more problematic in this stream system
than with FILE*'s because 1. the buffer is an accessible object, and
2. the high level operations (ie read) may not want to be always
checking that the file pos hasn't changed.
What of input-and-output streams? We once had a mechanism for taking
two streams and combining them into a single stream object which could
respond to both input and output operations.
Q. Should we support combined i-o streams in the definition?
No need; all streams in this proposal support both I/O. For file
streams, whether you can do one, the other, or both depends on the
mode passed to fopen. For other types of streams, it's left to the
stream author to control this.
We spent some time at GMD checking that the new streams would fit into
the definition wrt threads and collections. To do this neatly, we
created the <object-stream> class alongside the <character-stream>
(aka <file-stream>) class, with the intention that these streams would
basically be queues of objects (compatible with generic stream operations
and with collections). However, we didn't work this through fully.
Our file-streams are basically end-points (streams connected to some
external data source/sink); are the object streams also end-points
(which can be connected together, a la sockets)? For the purposes
of integrating collections with streams, I think what we actually
need is a <fifo-stream> class. Like the combined streams above,
one of these objects responds to input and output operations.
Q. Shall we introduce a <fifo-stream> class in the definition?
I don't think this is necessary. Object streams can just be created
by default to do both I/O.
Finally, I have some sympathy with Richard's comment about URLs.
We could lead the field here by introducing a {file,path}name
mechanism which accommodates the extra information (probably just
host or address) needed by URLs and many other network-oriented
data services (eg in my own work, my filename objects have these
slots: host, protocol/drive, dirname, basename, extension).
If we have an abstract filename class then these extra slots
can be added. I'll not make a specific question of this, but
I would be interested in your opinion.
This is not exactly leading the field; it's getting back to the
complicated pathname mechanism. What we like about filenames vs.
pathnames is that they encapsulate simple strings and use simple
string processing operations to extract and combine information; these
operations are based on pre-existing Unix commands. They don't have
slots, and are not mutable. They also have a fixed syntax across
OS's. This eliminates most of the major hassle of using CL-style
pathnames. If all of these fields are needed, perhaps filenames are
the wrong abstraction for EuLisp.
-- Harley
* Streams
:From: Julian Padget
Date: Thu, 16 Dec 1993 11:56:10 +0100
:From: Dave De Roure
[...]
I need some help from you all on a few issues that have emerged:
The default handler approach to buffer fill and flush operations is
attractive because it gives the programmer two orthogonal techniques:
they can put methods on the generic function (essentially a `static'
approach at level 0) or they can introduce new handlers (a `dynamic'
approach). After discussion at GMD, Julian and I both became convinced
of the handler solution. Since we don't have default handlers I had
decided, in the spirit of compromise, to specify the generic function
solution in the definition, thinking this admits the handler solution
as a possible implementation technique. But I'm no longer sure we can
postpone this decision - I think the user should know if there are
conditions being raised. Incidentally, there is a third (orthogonal?)
approach, which is to specify the fill/flush functions by passing them
as arguments when the stream is created.
Q. Should the generic fill/flush functions be called directly or via
a default handler?
I'm in favour of the combination of handler and generic function
because of the flexibility it provides. The issue that remains is how
I justify changing my opinion to accept default handlers.
I remember our talking for a long time about default handlers a few
years ago. I don't want to get into a debate about whether I
mis-remembered, but I think we finally finessed the issue on the
grounds that we had no errors for which there was any useful default
treatment other than either displaying it and perhaps entering a
debugger or a break loop and that was something felt to be part of the
"environment" and not to be specified.
Since then our ideas on threads have crystallized significantly and I
think this has some bearing on the matter. The proposal talks of a
single global handler, but I suspect that is going to be inconvenient
in a parallel world. We have also edged towards the recognition of a
per-thread handler in the treatment of unhandled conditions on threads
(the aborted state). So it seems to me that we have already almost
accepted default-handlers---we just do not provide access to them and
we still need not, although we can define additional default behaviour
for them.
Assuming we did agree to the handler + gf approach there are a number
of nasty beasties lurking: does each thread use the fill and flush gfs
directly or does each get a copy of the gf? If the latter, how do we
add methods to the thread-specific gfs? If we do allow access to a
thread's default handler, should it be a no arg function which returns
the handler for the current thread or should it be possible easily to
access (and therefore mess with) another thread's handler? (Of
course, a thread could always pass it's handler on anyway; it's just a
question of whether it can be taken or must be given).
My preferred solution is for there to be a single gf for each of fill
and flush which each thread uses (therefore any new methods
potentially affect all threads) and that we state the existence of a
default-handler (a generic function) for each thread which, by
default, has a method for fill-buffer and flush-buffer which call fill
and flush respectively. I am undecided about whether to make this
object accessible, but am tending towards that view, in which case I
prefer there to be a function called default-handler, of no arguments,
which returns a gf.
Ilogtalk doesn't have seek or unget operations. I propose that
we support these. In fact, I propose this rationale: we should have
a standard stream protocol (inc seek, unget etc) which could be applied
to any stream, and if an operation is unsupported for a particular
stream type, a condition is raised. This is to do away with the added
complexity of seekable streams (since you often don't know whether a
stream is seekable or not until you try it).
Q. Should we support seek and unget operations?
I sympathize with the second point Harley makes. Is the first one
suggesting that the program might mutate the buffer object thus making
unget and seek difficult to implement? Sounds a more heinous crime
than having unsynchronized high (read) and low (getc) level operations
on the same stream.
From this point on I got confused (io-streams, fifo-streams) since I
thought we had gone through all this in sufficient detail to be pretty
sure that it all worked. In particular, I thought we had concluded
that object streams could do everything we wanted (having gone through
the recognition of end-points etc.). What happened to change your
mind?
Finally, I have some sympathy with Richard's comment about URLs.
We could lead the field here by introducing a {file,path}name
mechanism which accommodates the extra information (probably just
host or address) needed by URLs and many other network-oriented
data services (eg in my own work, my filename o