Re: language design after Algol 60, was Add nested-function support



From: Martin Ward <martin@gkc.org.uk>
Newsgroups: comp.compilers
Date: Tue, 27 Mar 2018 14:46:41 +0100
Organization: Compilers Central
References: <6effed5e-6c90-f5f4-0c80-a03c61fd2127@gkc.org.uk> 18-03-042 18-03-047 18-03-075 18-03-079
Keywords: history, design
Posted-Date: 29 Mar 2018 17:27:58 EDT

On 20/03/18 09:06, Anton Ertl wrote:
> Higher-order functions (I somehow mixed up "higher-order" with
> "first-class" in the above) were already available in IPL, before
> Algol 60, which falsifies this theory for this feature.


The theory is that "the attitude of the Algol 60 designers towards
language design is what led to these innovations appearing".
Clearly, this "attitude" was present before, during and after
the development of Algol 60. In the context of the theory,
Algol 60 is just one example of the influence of this attitude.


The second part of the theory is that since that time,
the attitude has changed and "language designers have become
very cautious and timid in specifying powerful new language features
and compiler research has stagnated."


> Concerning features that appeared after 1960, there is no way to
> verify or falsify your theory.


It should be easy to falsify the theory: where are the new language
features that have been invented in the last, say, twenty years?
Where are the powerful new languages which make Haskell
look like Dartmouth BASIC?


A very small example of language designers prioritising
implementation effort over power and simplicity: even today there
are very few mainstream languages which implement arbitrary-precision
integers ("bignums") and rational numbers as primitive data types.
Proving that an algorithm is correct is simpler when you don't have
to worry about integer overflow or floating-point approximations.
For example, the binary search algorithm that Jon Bentley proved correct
and published in "Programming Pearls" in 1986 had an integer overflow
bug in its midpoint computation that went undetected for two decades:


https://research.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html
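
The bug, rendered as a minimal sketch in C (the linked post discusses
the Java library version, but the same pattern fails in C):

/* Buggy version, as published: for arrays of more than about
   2^30 elements, low + high exceeds INT_MAX, mid goes negative,
   and a[mid] reads out of bounds. */
int binary_search_buggy(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;    /* signed overflow here */
        if (a[mid] < key)
            low = mid + 1;
        else if (a[mid] > key)
            high = mid - 1;
        else
            return mid;
    }
    return -1;    /* not found */
}

/* Fixed version: high - low cannot overflow,
   because 0 <= low <= high < n. */
int binary_search_fixed(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;    /* safe midpoint */
        if (a[mid] < key)
            low = mid + 1;
        else if (a[mid] > key)
            high = mid - 1;
        else
            return mid;
    }
    return -1;
}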


It doesn't help that C and C++ have very lax rules for dealing
with overflow (in the sense that many cases are declared to
be undefined behaviour), leading to complex solutions for
simple problems:


http://www.di.unipi.it/~ruggieri/Papers/semisum.pdf
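
For contrast, here is the naive version alongside one simple
workaround (a sketch, not the paper's solution). It assumes that
long long is strictly wider than int, which holds on mainstream
platforms but is not guaranteed by the standard; closing that
portability gap is what makes the paper's problem non-trivial.

/* Naive semi-sum: undefined behaviour whenever a + b
   overflows the int range. */
int semisum_naive(int a, int b)
{
    return (a + b) / 2;
}

/* Do the addition in a wider type.  The quotient always fits
   back in an int, since (a + b) / 2 lies between INT_MIN and
   INT_MAX.  Assumes long long is wider than int. */
int semisum_wide(int a, int b)
{
    return (int)(((long long)a + (long long)b) / 2);
}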


(I am still hoping for my theory to be proved wrong!)


A related, and more worrying, trend is the new and growing area
of research under the heading "empirical software engineering",
which aims to do away with program semantics altogether.
A program is deemed "correct" if and only if it passes its test suite.
Various automated and semi-automated ways of modifying the program
are being investigated: any modification which passes the test suite
is deemed to be "correct". For example, "empirical slicing"
may be defined as "delete random sections of code and call the result
a valid slice if it passes the regression test". Program semantics
and program analysis are considered to be "too difficult"
by these researchers, and therefore are not attempted.
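
To make the procedure concrete, here is a minimal sketch in C of
such an "empirical slicer". Everything in it is hypothetical and
invented for illustration: the file names, the compile command and
the ./run_tests.sh script are assumptions, not taken from any
published tool. It deletes random runs of lines and keeps any
deletion that still passes the test suite:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAX_LINES 10000
#define MAX_RUN   10

static char *lines[MAX_LINES];
static int nlines;

static void load(const char *path)
{
    FILE *f = fopen(path, "r");
    char buf[4096];
    if (!f) { perror(path); exit(1); }
    while (nlines < MAX_LINES && fgets(buf, sizeof buf, f)) {
        char *copy = malloc(strlen(buf) + 1);
        if (!copy) exit(1);
        strcpy(copy, buf);
        lines[nlines++] = copy;
    }
    fclose(f);
}

static void save(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    for (int i = 0; i < nlines; i++)
        if (lines[i])              /* NULL marks a deleted line */
            fputs(lines[i], f);
    fclose(f);
}

/* "Correct" means nothing more than: the candidate compiles and
   the regression tests report success (exit status 0). */
static int passes_tests(void)
{
    save("candidate.c");
    return system("cc -o candidate candidate.c"
                  " && ./run_tests.sh ./candidate") == 0;
}

int main(void)
{
    srand((unsigned)time(NULL));
    load("program.c");
    if (nlines == 0)
        return 0;
    for (int attempt = 0; attempt < 1000; attempt++) {
        char *saved[MAX_RUN];
        int start = rand() % nlines;
        int len = 1 + rand() % MAX_RUN;
        if (start + len > nlines)
            len = nlines - start;
        /* Tentatively delete a random run of lines ... */
        for (int i = 0; i < len; i++) {
            saved[i] = lines[start + i];
            lines[start + i] = NULL;
        }
        /* ... and undo the deletion if any test now fails. */
        if (!passes_tests())
            for (int i = 0; i < len; i++)
                lines[start + i] = saved[i];
        /* Otherwise the smaller program is a "valid slice". */
    }
    save("slice.c");
    return 0;
}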


Readers of comp.risks will no doubt already be wondering how
such methods avoid introducing security holes, given that
a security hole will not necessarily prevent the program
from passing its test suite (unless the tests happen
to include the carefully crafted input which triggers
the hole!). As far as I can tell, the answer is:
they don't!


--
Martin


Dr Martin Ward | Email: martin@gkc.org.uk | http://www.gkc.org.uk
G.K.Chesterton site: http://www.gkc.org.uk/gkc | Erdos number: 4


