Report on 1993 Perlis Symposium at Yale (Phil Pfeiffer)
Fri, 21 May 1993 19:39:41 GMT

          From comp.compilers


Newsgroups: comp.compilers
From: (Phil Pfeiffer)
Keywords: conference
Organization: Compilers Central
Date: Fri, 21 May 1993 19:39:41 GMT

The following report on the 1993 Alan J. Perlis Symposium on Programming
Languages was originally written for students and faculty at ESU. I am
posting it to comp.compilers because I enjoyed listening to the talks, and
thought that others who didn't attend might enjoy learning about what they
missed. Be advised, however, that this report was written as an
afterthought; it was prepared from notes that I made at the conference,
and written without the help of electronic recording devices, prepared
texts of talks, or transparencies, and without the benefit of critiques by
speakers. Although I have made a good faith effort to reproduce the gist
of the talks, I cannot, in short, vouch for the report's complete accuracy.

One other word of caution is in order. Since many talks--and this
includes good speeches--do not always read well when transcribed, I have
condensed some remarks, and rearranged parts of others in an effort to
improve the flow of the report. I apologize in advance for any mistakes
that I made in the attempt.

-- Phil Pfeiffer


              Report on the Second Annual Alan J. Perlis Symposium

On April 29, 1993, the Yale University Computer Science Department
hosted its second annual symposium on programming languages. The
symposium is named for the late Alan J. Perlis, who spent the last two
decades of his life and career at Yale: Perlis is best remembered for his
contributions to the design of Algol 60, for which he received the first
Turing Award. This year's Perlis symposium featured Peter Naur, who
helped to develop the Backus-Naur Form for programming language grammars;
Ehud Shapiro, who developed Concurrent Prolog; Bjarne Stroustrup, who
developed C++; and David Turner, who developed the functional language
Miranda. The presentations, which each ran an hour in length, were
followed by an hour-long panel discussion.

Peter Naur's Presentation

The first speaker, Peter Naur, discussed the programming language
community's failure to achieve what were once regarded as two crucial
goals in programming languages research. Thirty-five years ago, when
Perlis and Naur were working on the definition for Algol 60, researchers
were hoping to devise a universal programming language for the programming
community. This language was to have the following two properties.
First, it would somehow help programmers to write error-free programs. It
would also promote code-sharing and code reuse by making it possible for
programmers to exchange texts readily. In Naur's words, this universal
language was to have been a language that "felt natural in the hands of a
programmer; one that would help the programmer *do right*."

The first goal, Naur argued, was never reached because it was based on an
unrealistic expectation of what language designers could do to help
programmers correct incorrect programs. There is little, if any, evidence
to support the proposition that a programming formalism exists that would
eliminate coding errors. Furthermore, languages cannot help users to
construct error-free specifications because the specification process
itself is incomplete: there is no formal way of relating a formal
specification to the 'real world' problem it was meant to solve.

The second goal--the quest for the universal programming language--has, in
Naur's view, proved unattainable for "deep" reasons. The search for such
a language was based on the fallacious assumption that there were natural
ways of coding programs. This assumption, in turn, followed from the
belief that every program modeled some aspect of the real world in a
natural fashion. This assumption that there are natural ways of writing
programs, said Naur, is false; "people", said Naur, referring to
individual approaches to solving problems, "differ enormously in any
respect you can imagine." As evidence for this point, Naur cited an
experiment in which he asked a class to first prepare study guides for
Turing's 1936 paper on computable numbers, and then to critique each
other's guides anonymously. Naur, who slipped his own guide into the lot,
said that there was no consensus at all about which guides were good, and
which were bad--his own included.

Another problem with the quest for a universal language is that it is
based on a misconception about the nature of programming. There is a
common assumption, said Naur, that the goal of programming is to produce a
text. Naur disputed this view: he holds that the task of the programmer
is not to produce a text, but to form a theory: an *understanding* of how
a certain data processing problem can be performed by a computer. This
view that texts are secondary consequences of theories is an unpleasant
one for many administrators, who view programming as a production process:
something that generates texts. What programmers exchange most often,
however, are not texts, but applications--final programs, together with
languages that go with them. [Naur, I believe, was asserting that a
running application was a far better tool for communicating a "theory"
than an unsupported program text.] And so, concluded Naur, "the whole
issue of *the* important programming language has faded away."

Ehud Shapiro's Presentation

The second speaker, Ehud Shapiro of the Weizmann Institute, developed
Concurrent Prolog while studying at Yale in the 1980s. Shapiro developed
Concurrent Prolog in response to (friendly) heckling from fellow graduate
students, who told him that logic programming languages could not be used
to develop "real" systems. Shapiro used his presentation to argue that
the expressiveness of Concurrent Prolog made it an excellent language for
systems development.

A Concurrent Prolog program is a series of predicates and guarded horn
clauses. A guarded horn clause is an expression of the form

      P :- G | Q1, ..., Qn.

where P :- Q1, ..., Qn is a standard Prolog horn clause, and G, a guard,
is a list of tests that controls the activation of the Qi. I did not have
time to note the details--Shapiro's talk moved quickly--but I believe that
these tests may either check the structure of a (shared) logic variable,
or determine if a variable is equal to--or differs from--a specified value.

The expressiveness of Concurrent Prolog, said Shapiro, derives, in part,
from the following natural correspondences between expressions in
Concurrent Prolog and objects in distributed computations:

*. Goal atoms like the Qi's in the above clause correspond to individual
        processes. The set of all goal atoms corresponds to the universe of
        distributed processes.

*. A clause corresponds to a rule for process behavior.

*. A guard corresponds to a rule for synchronizing processes.

*. A shared logic variable corresponds to an interprocess communications
        channel.

*. The act of instantiating a shared variable corresponds to the act of
        sending a message between processes along a designated channel.

*. A clause whose guard is not fully instantiated corresponds to a blocked
        process.

By way of illustration, Shapiro showed the following three representative
Concurrent Prolog clauses, and gave the following interpretations of their

*. "P :- G | Q1, ..., Qn" means "P dies when G is satisfied, spawning
        processes Q1 through Qn."

*. "P :- G | q(....)" means "P changes state when G is satisfied."

*. "P :- G | true" means "P halts when G becomes true."
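The process reading above can be mimicked, very loosely, in Python with
threads and single-assignment variables (the LogicVar helper and its method
names are my own illustration, not Concurrent Prolog machinery): a guard
waiting on an uninstantiated variable is a blocked process, and binding the
variable is sending the message that unblocks it.

```python
import threading

class LogicVar:
    """Single-assignment shared variable: a crude stand-in for a
    Concurrent Prolog shared logic variable (hypothetical helper)."""
    def __init__(self):
        self._set = threading.Event()
        self._value = None
    def bind(self, value):          # "sending a message" on the channel
        assert not self._set.is_set(), "logic variables bind only once"
        self._value = value
        self._set.set()
    def wait(self):                 # a guard blocks until instantiation
        self._set.wait()
        return self._value

chan = LogicVar()
out = []
consumer = threading.Thread(target=lambda: out.append(chan.wait()))
consumer.start()      # blocked process: its guard is not yet instantiated
chan.bind("hello")    # instantiating the variable unblocks the consumer
consumer.join()
print(out[0])         # -> hello
```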

Guards are one of two key differences between standard and Concurrent Prolog.
The
second is indeterminacy: the Concurrent Prolog interpreter does *not*
backtrack. This, said Shapiro, corresponds to the behavior of realistic
systems, which take wrong turns and fail.

The expressiveness of Concurrent Prolog also derives from the
expressiveness of Prolog proper. Unification is a powerful mechanism that
can achieve the effects of variable assignment, equality testing, list
access and construction, parameter passing by value and reference, and
variable aliasing. The equivalence of programs and data also makes it
possible to do systems programming by writing a Concurrent Prolog
interpreter in Concurrent Prolog. This is important, said Shapiro,
because "the only beautiful way to do systems programming is by
meta-programming."
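Unification's versatility, mentioned above, can be sketched with a toy
first-order unifier in Python (the representation--variables as strings
beginning with '?'--and the function names are my own; there is no occurs
check, and this is not Shapiro's implementation). One mechanism gives
assignment, equality testing, and list access at once.

```python
def walk(t, subst):
    # follow variable bindings until we reach a non-variable or a free var
    while isinstance(t, str) and t.startswith('?') and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Tiny first-order unification sketch; returns a substitution
    dict on success, None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}          # bind variable a: "assignment"
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):          # structural decomposition: lists
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                         # clash: unification fails

# one call both tests the structure and takes the list apart:
s = unify(('cons', '?H', '?T'), ('cons', 1, ('cons', 2, 'nil')))
print(s['?H'], s['?T'])   # -> 1 ('cons', 2, 'nil')
```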

Shapiro then substantiated his claim by presenting a series of Concurrent
Prolog interpreters written in Concurrent Prolog. Since the hand that
changed the transparencies was once again faster than the hand that copied
notes, I was not able to take down a complete example. The various
meta-interpreters, however, were uniformly short and simple. The first
interpreter, a vanilla meta-interpreter for Concurrent Prolog, was about
five clauses long; the last, an interrupt-handling,
process-snapshot-taking, termination-detecting meta-interpreter, was no
more than ten clauses in length. If the interpreter has a weakness--and
this was brought out by a subsequent question from the audience--it's
speed: Concurrent Prolog programs currently run more slowly than
comparable programs written in standard imperative languages. Shapiro
also noted that far more effort has gone into developing quality compilers
in imperative languages.

Shapiro finished with remarks on the past, present, and future of
concurrent logic programming (CLP). The past of CLP, said Shapiro, was
the Fifth Generation Computing Systems project, and its emphasis on
finding and exploiting parallelism in logic programs. This project, said
Shapiro, proved that boring programming applications should probably be
coded in FORTRAN.

Research in CLP is currently being directed toward improving these
languages. Shapiro mentioned concurrent constraint programming, or the
use of tests other than equality in guards, as one area of research. A
second is the development of a more restricted, single-writer subset of
Concurrent Prolog. One application for a single-writer Concurrent Prolog
is active mail, an Internet-wide distributed system that piggybacks
interactive connections onto e-mail.

The future of computer science, Shapiro believes, is the next Grand
Programming Challenge: programming cyberspace. Concurrent Prolog, Shapiro
concluded, is a good language for accomplishing this task.

Bjarne Stroustrup's Presentation

The afternoon's first speaker, Bjarne Stroustrup, discussed--for want of a
better characterization--the past, present, and future of object-oriented
programming (OOP). OOP, said Stroustrup, is an approach to professional
programming that "comes from the view that programs are to some extent
models of the real world."

Stroustrup motivated OOP by discussing the inadequacies of earlier
approaches to programming. The first and fundamental approach to writing
computer programs, the reduction of programs to bits and bytes, is too
low-level, verbose, and difficult to maintain. The second historical
approach, which introduced data abstraction into languages, is still
inadequate: programs become easier to code and maintain, but programs that
model the behaviors of related objects tend to contain messy multi-way
case statements. OOP, the next try, uses relationships between objects to
implement code. In particular, IS-A relationships (e.g., "a fire truck is
a vehicle") are represented as relationships between hierarchically
organized types, and implemented as methods within a class.
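The contrast between messy multi-way case statements and IS-A hierarchies
can be sketched in Python (the shapes example is my own illustration, not
Stroustrup's): with data abstraction alone, every behavior lives in one
central dispatch; with OOP, each class carries its own method, so adding a
new kind of object touches no existing code.

```python
# Pre-OOP style: behavior for related objects in a multi-way case.
def area_case(shape):
    kind, *dims = shape
    if kind == 'circle':
        r, = dims
        return 3.14159 * r * r
    elif kind == 'rect':
        w, h = dims
        return w * h
    raise ValueError(kind)

# OOP style: the IS-A relation is a class hierarchy, and the method
# is implemented within each class.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

class Rect(Shape):
    def __init__(self, w, h): self.w, self.h = w, h
    def area(self): return self.w * self.h

shapes = [Circle(1), Rect(2, 3)]
print([round(s.area(), 2) for s in shapes])   # -> [3.14, 6]
```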

OOP represents another significant improvement in programming style, when
done well. Grady Booch's 140,000-line Ada data structures library, for
example, shrank to 10,000 lines of code when rewritten in C++: the
run-time speed and memory consumption of the two libraries are identical.
The adoption of an OOP language for coding, however, does not
automatically lead to a better coding style. Stroustrup posted a slide
that read

            Government mental health warning:
            It is possible to write truly awful object-oriented programs.
            There is no substitute for

            - intelligence.
            - experience.
            - taste.
            - work.

Stroustrup noted that he himself had written some of these truly awful
programs, and that others like himself had gone through a learning curve
when adjusting to OOP. "It is important", said Stroustrup, "to emphasize
in an industrial world that there is no magic, however much managers want
to believe that."

The next part of the talk compared Simula, Smalltalk, and C++. Simula 67,
the first object-oriented (OO) programming language, was developed by Dahl
and Nygaard in 1967. ("One view of the world [of programming languages]
that's close to mine [is that] 'All good comes from Simula 67'".) Simula
67, which was based on Algol 60, featured classes and inheritance. It
supports a nice balance of static and dynamic type checking, has good
simulation libraries, and runs under most operating systems. Negative
features of Simula 67 include generated code that runs from medium fast to
slow; weak general purpose libraries; poor interoperability with other
languages; and a weak development environment. Simula 67 is dying out, and has
little current use.

The remaining two OO languages mentioned in this part of the talk adopt
opposite approaches to type checking. Smalltalk, which was invented in 1972
and achieved its present form in 1984, was influenced by Simula and Lisp.
It does only dynamic type checking, which makes programs run slowly (but
fast enough for most interactive applications). It supports good general
libraries and a good development environment, which features especially good
support for graphics. Interoperability with other languages is poor.
There are about 10,000 Smalltalk programmers, and the number is increasing.

C++, Stroustrup's OO language, achieved its present form in 1985 (versions
of the language go back to 1980). C++ is basically C with Simula-like
classes. Stroustrup used a static type-checking algorithm, which was
needed to make the language run as quickly as C, and added minimal support
for dynamic type inquiry, which was needed for portability. Compiled code
is fast: fast enough for hard real-time. The language runs on most
operating systems, and will run on bare machines. It is interoperable
with most traditional languages, including C and Fortran. There are good
C++ libraries for many applications. Finally, the number of C++
users has grown beyond Stroustrup's reckoning, and continues to increase
rapidly. Stroustrup said he quit counting at 400,000 users; a person in
the audience volunteered the information that Borland just celebrated the
shipment of its 1,000,000th C++ compiler.

The final part of the talk dealt with three future directions in
programming languages research. Stroustrup said that he thought of C++ as
a part of the bridge between the past and the future. ("The present is
always the bridge between the past and the future ... as soon as we find an ideal
language for a class of problems, we've moved on.") One important
direction for future research, new and supposedly better languages,
includes new OO languages like Ada-9x, CLOS, and Eiffel. (A major impetus
for Ada-9x is the experiment with rewriting the Booch library, which
Stroustrup referred to as "the only commercially successful Ada software
product".) It also includes more research into support for concurrency
and distributed processing. Stroustrup, who did his Ph.D. at Cambridge on
concurrency, went away with the feeling that he knew forty incorrect ways
of supporting concurrency--that is, forty ways that were correct only for
a small subset of the user community. This is the principal reason why
C++ does not support concurrency directly: he prefers libraries and
special-purpose language extensions. Ada, said Stroustrup, is a really
good example of why one should not build such extensions into
general-purpose languages.

The second major direction for research is object-oriented design.
Stroustrup, however, talked little about this topic, and a good deal about
how design is currently done (or not done) in industry. Industry,
Stroustrup said, tends to force out all abstraction from programs, and to
force users to use subsets of C and COBOL. (This point was elaborated on
in the question-and-answer period following Turner's talk, where
Stroustrup talked about places that give hiring preferences to
morons--since they're interchangeable--and provide tools that give little
support for initiative). Stroustrup appeared pessimistic about the
chances for the speedy conversion to better languages and methods of
design, mostly because of the effort and expense of retraining people who were
used to the old ways of designing programs.

This is why the third major area for research--tools and environments--is
the "Grail" of current research in programming languages. There is
currently a market for software libraries, and a need for better
development environments. New languages are needed that have the great
performance of statically type-checked languages, and the great
development environments that characterize dynamically type-checked
languages. He predicted that the two camps would steal each other's
ideas: that the emerging languages would balance static and dynamic type
checking, and provide an optional environment that understands the
language's syntax and type system, and a program's dependence structure.
Incremental computation is another must.

These changes, concluded Stroustrup, must proceed as fast as possible, but
no faster. "How do you take off a year to learn? It's expensive."

David Turner's Presentation

Turner said that he was originally going to talk about Miranda, but
decided to discuss other research instead. The thesis of this talk was
that functional programming (FP) is a very good idea, but the community
hasn't got it quite right yet. Conventional languages, which Turner
called *weak* languages, allow users to write programs that fail. In this
talk, Turner argued for a restricted style of functional programming that
uses functions that always return a valid result.

Turner began by reviewing the gospel of functional programming. A program
written in a conventional imperative language is typically difficult to
parallelize or prove correct; the use of assignment statements and side
effects makes such programs difficult to reason about. Standard
conventional languages are also somewhat cumbersome to use: identifiers
denote different values at different moments in a program's execution, and
expressions that are legal in one context (e.g., the Pascal enumerated
type) are often illegal in others.

In a functional language, one writes down programs by writing down
functions. The syntax of a typical functional language is simple and
uniform, for a variety of reasons:

*. recursive definitions are used instead of loops to implement repetition;
*. program syntax is consistent: replacing any expression in a well-formed
        functional program by any other expression having the same type yields a
        second well-formed program;
*. the language is referentially transparent: replacing any expression by any
        other expression of equal value yields an equivalent program (this
        property of functional languages is known as referential transparency:
        Quine coined the term in 1960, but the idea is due to Frege); and
*. side effects are disallowed: a variable's value is defined no more than
        once within a scope (by a "let" or "where" clause).
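The properties above can be sketched in Python under a self-imposed
functional discipline (an illustration, not Miranda): recursion replaces
loops, each local name is bound exactly once as in a "where" clause, and
referential transparency means any expression can be replaced by its value.

```python
def length(xs):
    # repetition via recursion rather than a loop
    return 0 if not xs else 1 + length(xs[1:])

def hypot_sq(a, b):
    # 'let'/'where' style: a2 and b2 are defined once, never reassigned
    a2 = a * a
    b2 = b * b
    return a2 + b2

# referential transparency: the call and its value are interchangeable,
# so '25' may stand in for 'hypot_sq(3, 4)' anywhere in the program
assert hypot_sq(3, 4) == 25
print(length([10, 20, 30]))   # -> 3
```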

Functional languages also expose a program's inherent parallelism: the task
of determining which subexpressions may evaluate in parallel is simplified
by the use of functions with let expressions, and by the requirement that
every identifier be bound to a unique value throughout a lexical scope.
Finally, the task of reasoning about functional programs is simplified by
referential transparency; by the absence of state changes; and by a
semantic requirement that evaluation proceed in an order-insensitive manner.

Standard functional languages, however, are not as simple to reason about
as one would wish. Every type domain T in a standard functional language
has a special element, _|_::T (pronounced 'bottom of type T'), that
represents the value of a failed program. For example, expressions such
as "3/0" or "let f(n) = 1+f(n) in f(1)" denote the value _|_::int. Turner
refers to such languages as weak FP languages, presumably because they
exhibit the weak Church-Rosser property (see below). Reasoning about the
behavior of weak FP languages, unfortunately, is complicated by the
possibility of error. For example, an expression 'e-e', where e is of
type int, may denote 0 or _|_::int. The expression 'false and _|_::bool'
may denote false or _|_::bool, depending on whether 'and', by definition,
may skip the evaluation of its right-hand argument. The presence of error
complicates the standard rules of inference: e.g., the principle of
induction, which must be cooked to account for _|_. Finally, permissible
variations in the order in which a computation is performed yield new
computations that give the same result, *or* _|_. This property of weak
FP languages, the weak Church-Rosser property, is not as clean as strong
Church-Rosser--the assertion that varying a computation's order of
evaluation leaves its outcome unchanged.
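The 'false and _|_' point can be sketched in Python, with a failing
computation standing in for _|_ (my own illustration of the weak-language
problem, not Turner's notation): a short-circuiting 'and' may skip its
right-hand argument and so denote false, while a strict 'and' forces both
sides and hits bottom.

```python
def bottom():
    return 3 / 0     # raises ZeroDivisionError: the value is 'undefined'

def strict_and(a, b):
    return a and b   # both arguments are evaluated before the call

# Python's built-in 'and' short-circuits, so the right side is skipped:
print(False and bottom())    # -> False: bottom() is never evaluated

# a strict 'and' forces both arguments, and the same expression fails:
try:
    strict_and(False, bottom())
except ZeroDivisionError:
    print("strict version hit _|_")
```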

Turner proposed that more research be directed into what he referred to as
*strong* FP languages--FP languages where every function is total
(everywhere non-_|_). Such languages, said Turner, have the standard
advantages of functional languages, *plus* simpler strategies for
reasoning about program correctness. This idea is not new: Per Martin-Lof
in the 1970's advanced a constructive type theory that featured dependent
types and an isomorphism between propositions and data. Martin-Lof's type
theory, however, is still not elementary enough for most programmers to
master. Turner, therefore, intends to explore a simpler approach to
building terminating programs that uses a strongly terminating subset of
Hindley-Milner type theory. Type variables will be allowed, but dependent
types will be ruled out, and programs and proofs will be kept separate.

In the language that Turner envisions, all primitive operations would be
total. Array subscripting, for example, must be total--sentinels would be
returned for out-of-bounds subscripts--and expressions like 0/0 would
evaluate to 0. [Turner recommended a paper by Runciman, which shows that
no important theorems are invalidated by this assumption.] Recursive data
type definitions like

        array alpha = array(nat -> alpha)
        list alpha = nil | cons alpha (list alpha)

are acceptable; contravariant type definitions like

        maniac = Mad(maniac -> alpha)

where the type name appears to the left of an arrow operator on the
right-hand side of the equation are not allowed. [This can be used to
introduce the Y combinator into the language.] Finally, all recursions
must be provably well-founded at compile time. To make such checks
practical, Turner would require that b be a syntactic subcomponent of a
when f(a) calls f(b). This rule, which can be generalized to multiple
parameters, forces the programmer to use only primitive recursive
functionals (not just functions) of finite type. This includes every
function that can be proved total in first-order logic from the Peano axioms.
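Turner's discipline can be sketched in Python (the function names and the
sentinel convention are my own illustration): primitives are made total, so
0/0 evaluates to 0 and out-of-bounds subscripting yields a sentinel; and
recursion is allowed only on a syntactic subcomponent of the argument, so
termination is evident by inspection.

```python
def safe_div(a, b):
    # a total division: n/0 evaluates to 0, in the spirit of Turner's 0/0 = 0
    return 0 if b == 0 else a // b

def safe_index(xs, i, sentinel=None):
    # total subscripting: out-of-bounds returns a sentinel value
    return xs[i] if 0 <= i < len(xs) else sentinel

def total_sum(xs):
    # f(a) may only call f(b) when b is a syntactic subcomponent of a:
    # here the recursive call descends on the tail of xs
    if not xs:
        return 0
    head, *tail = xs
    return head + total_sum(tail)

print(safe_div(0, 0), safe_index([1, 2], 5), total_sum([1, 2, 3]))
```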

This final restriction leads to the first of two objections to Turner's
research program: i.e., that the resulting language is not Turing
complete. Turner, however, argued that objects that are left out have
such unbelievably high complexity that they must in some sense be
infeasible to compute. The kinds of algorithms that programmers work with
would have to change; Turner, however, conjectures that about 80% of all
recursion is primitive recursion-- and hypothesizes (he used the phrase
"half-baked idea" several times) that every non-primitive recursive
algorithm can be converted into an algorithm of equivalent efficiency with
the right intermediate data structure. Quicksort, for example, can't be
written as a primitive recursive algorithm. One can, however, perform
equivalent tree sorts; the tree sorts, in effect, capture the control
structure. Again, the algorithm for computing x^n that multiplies x^(n/2)
by x^(n/2) is not primitive recursive when it descends from x^n to
x^(n/2), but the equivalent algorithm that performs the computation on
lists of bits is. One counterexample is Euclid's algorithm for computing
the GCD, but GCD could, as a fallback position, be made a primitive
function in the language.
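The quicksort-to-treesort move can be sketched as follows (a Python
illustration of the idea, not Turner's code): the tree is the intermediate
data structure that captures quicksort's control structure, and every
recursive call descends on a syntactic subcomponent, so the sort is
structurally (primitively) recursive.

```python
def insert(t, x):
    # structural recursion: recursive calls descend on subtrees of t
    if t is None:
        return (None, x, None)
    l, v, r = t
    return (insert(l, x), v, r) if x < v else (l, v, insert(r, x))

def flatten(t):
    # structural recursion on the tree yields the sorted list
    if t is None:
        return []
    l, v, r = t
    return flatten(l) + [v] + flatten(r)

def treesort(xs):
    t = None
    for x in xs:          # a fold over the input, building the tree
        t = insert(t, x)
    return flatten(t)

print(treesort([3, 1, 2]))   # -> [1, 2, 3]
```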

These restrictions on the language lead to "incredibly straightforward"
proofs of program correctness that start with program equations as axioms,
and use structural induction throughout. Said Turner, "You can throw away
your textbook on domain theory: you won't need it."

A second objection to strong FP is that it is not possible to write an
operating system if all functions terminate. This objection, however, can
be met by introducing co-data--roughly speaking, well-behaved infinite
lists-- into the language. Codata is defined by equations over types that
produce maximal rather than minimal fixpoints: for example,

        co-list alpha => co-nil | co-cons alpha (co-list alpha)

In co-well-founded co-recursion, definitions ascend on their starting
point: co-recursive definitions, in other words, build infinite
structures. The following are two definitions of infinite lists:

      ones = co-cons( 1 , ones ) [ones is an object of type co-list nat]

      f a b = co-cons(a , f b (a+b))
      fibs = f 0 1 [fibs is an object of type co-list nat]
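Python generators give a workable model of these co-data definitions (a
sketch of the idea, not Turner's notation): each definition is productive,
yielding an element before recurring, and a consumer draws only a finite
prefix of the infinite structure.

```python
from itertools import islice

def ones():
    # ones = co-cons(1, ones): an infinite list of 1s
    while True:
        yield 1

def fibs():
    # fibs = f 0 1 where f a b = co-cons(a, f b (a+b))
    a, b = 0, 1
    while True:          # productive: yields before 'recurring'
        yield a
        a, b = b, a + b

print(list(islice(fibs(), 8)))   # -> [0, 1, 1, 2, 3, 5, 8, 13]
```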

The introduction of co-data leads naturally to a principle of co-induction
that can be used to prove infinite structures equal. This extended
language also supports a generalized version of the strong Church-Rosser
principle: one that 'ignores' unconsumed data in infinite lists. There are
examples of infinite lists, like the list of all primes, that are
difficult to formulate efficiently in this formalism, but this is also a
matter for further research.

Turner concluded the talk by observing that Turing's Theorem on
programming language completeness left the programming languages community
with a choice between security and universality: it told the world that it
must choose between languages in which all programs could be proved
correct, and languages in which all computable programs could be expressed.
Thirty-five years ago the community chose universality; it may, said
Turner, be time to reconsider this choice.

Panel Discussion

The four speakers fielded a variety of questions from the audience. What
follows is a synopsis of what I believe were the highlights of the
discussion.

Stroustrup said that the most important need for programming in the large
is support for program maintenance. A study of program maintenance
conducted by Stroustrup over a 3-5 year period showed that going from C to
C++ halved the cost of maintenance: bug reports were almost halved, and
the number of lines altered in changes dropped dramatically as well. Much
more work needs to be done on support for design changes.

It is not surprising, said Stroustrup, that libraries would be difficult
to learn, but that is not particularly the fault of overloading,
subclassing, and superclassing: some libraries--even well-written
ones--are more complicated than the average programming languages.

The panel concurred that universities must teach both declarative and
imperative styles of programming, given current practice, but felt
strongly that a user's first language should be declarative. This is how
computer languages are taught in Australia and Europe. High school
students, said Shapiro, find Prolog much, much easier to learn than
professors would like to believe. A pilot program in Israel that teaches
high school students to program in Prolog, said Shapiro, has a 60% success
rate; some students, said Shapiro, managed to develop sophisticated and
deep expert systems in Prolog after three to four years' worth of work in
the language.

One speaker recommended a book by Morris Kline, "Why the Professor Can't
Teach".

There is work being done to integrate the functional and logic-based
approaches to FP: Shapiro mentioned a language called lambda prolog which,
he said, is "an amazingly powerful tool."

A member of the audience wondered about how many fresh ideas in
programming languages had been developed since around 1970. Stroustrup
replied that his major concern was not what's been developed since 1970,
but rather getting industry to embrace ideas developed in the 1960s. "We
can all agree", said Stroustrup, "that [current working practices] stink,
and that we need something better than C, but I lose sleep about how we
get from there to here."

Closing Observations

This synopsis does not give the reader a feel for the presentations
themselves. Naur's talk, the slowest of the four, was something of a
disappointment: the insights he had to share seemed important, but the
talk, as presented, could easily have been given in a half hour. [One
slide, as I recall, was used for the entire presentation.] The incident
reminded me of a time when I went to see Segovia, then an octogenarian,
play guitar; when I told a friend that Segovia did not play as well as
other, younger guitarists I had seen, this friend remarked that it was a
wonder that Segovia, at that age, was still giving concerts.

I enjoyed each of the other three talks immensely. Shapiro's talk *moved*;
I had a difficult time keeping up with the notes. Stroustrup's talk was
the best attended and probably the best received; Stroustrup is a fine,
fine speaker, and is well worth hearing if you have the chance. I was
also impressed by Turner's talk, and, incidentally, by my just being able
to follow much of what he was saying; apparently, I learned something from
seven years' worth of graduate study in programming languages [:^)]. Paul
Hudak, the moderator, did not seem impressed with Turner's argument that
languages that allow users to make errors should be abandoned, because
they are difficult to reason about; then again, Hudak has made a career
out of reasoning about errors, and one might not expect him to warm
quickly to such a thesis.

The one jarring moment of the symposium came near to the beginning of the
panel discussion, when David Turner read a prepared statement in which he
protested his inclusion in a panel discussion with Shapiro. Turner, in
this statement, objected to the Israeli government's recent 'murders'
[Turner's word] of Palestinians who were protesting the deportation of
other Palestinians, in violation of a U.N. resolution to the contrary.
Turner, I believe, was objecting to Shapiro's presence at Yale on the
basis of Shapiro's connections with the government of Israel as a member
of the Weizmann Institute. At the very end of this statement, Turner
added--as an afterthought--that he thought well of Shapiro's work, and had
nothing against him personally. Shapiro, who was as taken aback as the
audience seemed to be, said that he would refrain from comment, but did
not address Turner personally, nor make eye contact with Turner for the
rest of the symposium. During the trip home, I told my students that
Turner's behavior, however heartfelt, was abrasive, thoughtless, and
unkind. Had Turner, I argued, felt the need to make such a statement, he
could easily have praised Shapiro as an individual first, and then
followed that up with a statement of concern about the actions of the
Israeli government. Alternatively, he could simply have bypassed the
final panel discussion, and asked Hudak, the moderator, to read a
statement of the sort that I just described. I also speculated on how
Turner would have felt if someone had made a similar statement about any
one of a number of actions taken by Great Britain during the twentieth
century, none of which were in his power to influence. [Shapiro, like
many of his contemporaries, is an officer in the Israeli army, but I can't
see how that's relevant either. -John]

I am happy to report that I cannot reproduce the text of Turner's remarks
from memory. I am somewhat more wistful about not having been able to
transcribe what other speakers had to say about other topics--though I did
manage to scribble down a few more scintillating remarks that were not
quoted in the first part of the report. Naur and Shapiro, for example,
gave interesting--and similar--definitions of programming languages:

*. "A programming language is a set of conventions that a human computer
          user must take to make the computer do what is required in a certain
          context." --Naur

*. "A programming language is whatever we program our computers in."
        -- Shapiro

The following two quotes, which also struck me, did not fit naturally into
the first part of the report:

*. "There's an old story about the person who wished his computer were as
          easy to use as his telephone. That wish has come true, since I no
          longer know how to use my telephone." --Stroustrup

*. "The evolutionary construction of layers of abstraction may be the only
          real (or is that virtual?) output of computer science." -- Shapiro


Another symposium is planned for next year. Information is available from
Judy Smith at Yale University.
