Re: Have we reached the asymptotic plateau of innovation in programming language development?


From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Newsgroups: comp.compilers
Date: Wed, 14 Mar 2012 05:19:36 +0000 (UTC)
Organization: Aioe.org NNTP Server
References: 12-03-012 12-03-014 12-03-022 12-03-027 12-03-030
Keywords: design, history
Posted-Date: 14 Mar 2012 22:10:38 EDT

BGB <cr88192@hotmail.com> wrote:


(snip)
> in such a scenario, language convergence will have been so widespread
> that people may have, by this point, ceased to clarify which language
> they were using, since most would have become largely copy-paste
> compatible anyways.


Well, first there is the division between interpreted languages, such
as Mathematica, Matlab, S/R, ..., and for that matter Excel, which are
good for quick one-time problems, and compiled languages, which are
faster when you want to do something many times.


It seems to me that division will stay, though the languages could
still tend to converge.


(snip)


> what about a language which is more complex than JavaScript, maybe
> roughly on-par with C or Java, and generally simpler than C++ and C# ?
> what about a VM where the bytecode has 100s of unique operations?
> ...


The bytecode question should be independent of the language, and the
bytecode should be optimized, as RISC processors are, for
compiler-generated code rather than for human-written assembly code.


> but, OTOH:
> in a language like JavaScript you can type "a+b*c", and get the expected
> precedence.


> this is different than typing, say (in Scheme):
> "(+ a (* b c))"
> or (in Self):
> "b * c + a" (noting that "a + b * c" will give a different answer).


Well, first there are a number of precedence cases in C that many
would have done differently. But there is also the question of which
operations should be operators and which should be functions. Note
that C has the % operator and the pow() function, whereas Fortran has
mod() and **.
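
A rough C sketch of where that line falls, with the Fortran
equivalents noted in the comment:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      int    r1 = 7 % 3;           /* remainder: an operator for integers...     */
      double r2 = fmod(7.0, 3.0);  /* ...but a library function for doubles      */
      double p  = pow(2.0, 10.0);  /* exponentiation is always a library function */

      /* Fortran draws the line the other way: mod(7,3) is an intrinsic
         function, while 2.0d0**10 uses the ** operator. */
      printf("%d %g %g\n", r1, r2, p);
      return 0;
  }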


> as I see it, the sorts of minimalism where one can't "afford" to have
> things like operator precedence, or including conventional control-flow
> mechanisms, is needless minimalism.


Then there are funny little differences, where language designers try
to force a programming style. Fortran added the ability to use
floating-point variables as DO loop variables in Fortran 77, then took
it away again (obsolescent in Fortran 90, deleted in Fortran 95).


Another difference is the ability to use array or structure expressions
instead of explicit loops.
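
For example, a rough C sketch of the explicit loop (vec_add is just a
made-up name), with the Fortran 90 array form noted in a comment:

  /* Element-wise add of two vectors: in C the loop is spelled out. */
  void vec_add(double *a, const double *b, const double *c, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          a[i] = b[i] + c[i];
  }
  /* Fortran 90 writes the whole thing as one array expression:  a = b + c */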


> most real programmers have better things to do than sit around working
> around awkward ways of expressing arithmetic, and figuring out how to
> accomplish all of their control flow via recursion and if/else (and so
> help you if you want something like sane file IO, or sockets, or
> threads, ...).


But each time you make something easier to use, something else usually
gets at least slightly harder. You don't really want 200 different
operators with way too many precedence levels; that is just too hard
for humans, even though it hardly bothers compilers at all.


> (and, it is not necessarily a good sign when things like loops, file IO,
> arrays, ... are supported by an implementation as... language
> extensions...).


Well, yes, but how many different loop constructs do you need?


> but, many people hate on C and C++ and so on, claiming that languages
> "should" have such minimalist syntax and semantics. however, such
> minimalist languages have generally failed to gain widespread acceptance.


> likewise, although a person can make an interpreter with a small number
> of total opcodes, typically this means the program will need a larger
> number of them to complete a task, and thus run slower.


OK, but high-level language design should be aimed at making things
easier for humans. Unless there is a big trend back toward assembly
programming, instruction sets, including those of bytecode
interpreters, should be designed for fast machine processing, not for
people.


(It is useful in debugging if the bytecode isn't too hard for humans to
read, but most of the time that shouldn't be needed.)


> for example, a person could make an interpreter with roughly 3 opcodes
> which fairly directly implements lambda calculus... but it will perform
> like crap.


> 10 or 15 is a bit better, then one probably at least has "the basics".


Well, you can tag the data such that only one add instruction is
needed to add any size or type (byte, short, int, long, float, double,
etc.), or you can have separate opcodes for each. It doesn't make a
huge difference either way, but the performance difference would
likely be measurable.
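
A rough sketch in C of the two designs (all type and opcode names here
are made up for illustration, not any particular VM's):

  typedef enum { TAG_INT, TAG_DOUBLE } tag_t;

  typedef struct {
      tag_t tag;
      union { long i; double d; } u;
  } value_t;

  /* Single generic opcode: dispatch on the tags at run time. */
  value_t add_generic(value_t a, value_t b)
  {
      value_t r;
      if (a.tag == TAG_INT && b.tag == TAG_INT) {
          r.tag = TAG_INT;
          r.u.i = a.u.i + b.u.i;
      } else {
          r.tag = TAG_DOUBLE;
          r.u.d = (a.tag == TAG_INT ? (double)a.u.i : a.u.d)
                + (b.tag == TAG_INT ? (double)b.u.i : b.u.d);
      }
      return r;
  }

  /* Separate typed opcodes: the compiler already picked the type,
     so no tag test is needed at run time. */
  long   add_i(long a, long b)     { return a + b; }
  double add_d(double a, double b) { return a + b; }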


> with several hundred opcodes, arguably a lot of them are "redundant",
> being easily expressible in terms of "simpler" opcodes, but at the same
> time, a single opcode can express what would otherwise require a chain
> of simpler opcodes.


That is the RISC vs. CISC argument from processor architecture.


> like, "wow, there is this here 'lpostinc' opcode to load a value from a
> variable and store an incremented version of the value back into the
> variable". is this opcode justified vs, say: "load x; dup; push 1;
> binary add; store x;"? I figure such cases are likely justified (they do
> tend to show favorably in a benchmark).
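
As a rough C sketch of that trade-off (opcode names and layout
invented for illustration, not BGB's actual VM): in a switch-based
dispatch loop, the fused opcode replaces five dispatches and several
stack operations with one case.

  enum { OP_LOAD, OP_DUP, OP_PUSH1, OP_ADD, OP_STORE, OP_LPOSTINC, OP_HALT };

  long run(const unsigned char *code, long *vars)
  {
      long stack[64];
      int  sp = 0;
      const unsigned char *pc = code;

      for (;;) {
          switch (*pc++) {
          /* The generic sequence: LOAD x; DUP; PUSH1; ADD; STORE x */
          case OP_LOAD:  stack[sp++] = vars[*pc++];          break;
          case OP_DUP:   stack[sp] = stack[sp - 1]; sp++;    break;
          case OP_PUSH1: stack[sp++] = 1;                    break;
          case OP_ADD:   sp--; stack[sp - 1] += stack[sp];   break;
          case OP_STORE: vars[*pc++] = stack[--sp];          break;
          /* The fused opcode: push the old value, bump the variable. */
          case OP_LPOSTINC:
              stack[sp++] = vars[*pc];
              vars[*pc]++;
              pc++;
              break;
          case OP_HALT:  return stack[sp - 1];
          }
      }
  }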




> OTOH, a person can go too far in the other direction as well.


Like VAX.


(snip)


-- glen

