Re: Effectiveness of compilers today



Newsgroups: comp.compilers
From: pardo@cs.washington.edu (David Keppel)
Keywords: performance
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: 93-02-082
Date: Tue, 16 Feb 1993 23:06:06 GMT

andreasa@dhhalden.no (ANDREAS ARFF) writes:
>[Who produces fastest code: compiler or human?]


John Levine writes:
>[A human can almost always meet or beat a compiler on small chunks.]


For extremely small chunks, Superopt does a nearly exhaustive search over
short instruction sequences and is hard to beat.


See:


    T. Granlund and R. Kenner, "Eliminating branches using a
    superoptimizer and the GNU C compiler," ACM SIGPLAN '92
    Conf. on Prog. Lang. Design and Impl., June 1992, SF CA.
    [work done on IBM RS/6000; Kenner's email address is
    kenner@vlsi1.ultra.nyu.edu]


I've forgotten where Henry Massalin's original Superoptimizer paper
appeared, but it *may* be available as an online TR from
`cs.columbia.edu'.
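
To give the flavor of the technique, here is a toy sketch of my own (the
operation set, test inputs, and goal() function are all made up for
illustration -- this is not code from either paper): enumerate short
instruction sequences and keep any sequence that agrees with a goal
function on a battery of test inputs, leaving final verification to the
user.  A real superoptimizer searches much longer sequences over a real
machine's instruction set; this version only tries single operations.

    /* Toy superoptimizer-style search (illustrative only). */
    #include <stdio.h>

    #define N_OPS 6
    enum { OP_ADD, OP_SUB, OP_AND, OP_OR, OP_XOR, OP_SAR31 };

    /* Goal function: -1 if x is negative, 0 otherwise. */
    static int goal(int x) { return x < 0 ? -1 : 0; }

    static int apply(int op, int a, int b)
    {
        switch (op) {
        case OP_ADD:   return a + b;
        case OP_SUB:   return a - b;
        case OP_AND:   return a & b;
        case OP_OR:    return a | b;
        case OP_XOR:   return a ^ b;
        case OP_SAR31: return a >> 31;  /* assumes 32-bit int, arithmetic shift */
        }
        return 0;
    }

    int main(void)
    {
        static const int tests[] = { -1234567, -5, -1, 0, 1, 7, 1234567 };
        const int ntests = sizeof tests / sizeof tests[0];
        int op, i, ok;

        /* Try every single-instruction "program" r = op(x, x). */
        for (op = 0; op < N_OPS; op++) {
            ok = 1;
            for (i = 0; i < ntests && ok; i++)
                if (apply(op, tests[i], tests[i]) != goal(tests[i]))
                    ok = 0;
            if (ok)
                printf("op %d matches the goal on every test input\n", op);
        }
        return 0;
    }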


It is usually hard for a human to do *worse* than a compiler, because the
human can simply use the compiler to produce code and then examine that
code for possible improvements. Humans are good enough at pattern matching
that they can usually find improvements the compiler missed.
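
Concretely, with GCC and most Unix C compilers you can ask for the
assembly listing with the -S flag (something like `cc -O -S foo.c';
exact option spellings vary by compiler) and then read the generated
`.s' file looking for missed improvements.  A made-up example of the
kind of routine you might inspect:

    /* foo.c -- compile with something like `cc -O -S foo.c' and
     * read foo.s for missed improvements. */
    int sum(const int *a, int n)
    {
        int i, s = 0;

        for (i = 0; i < n; i++)
            s += a[i];
        return s;
    }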


At the risk of beating a dead horse (power):


In my (limited) experience it is ``easy'' to get a given code fragment to
go 10-20% faster, but it's rarely worthwhile: if a routine is 10% faster
and the program spends 10% of its time there, you've just sped up the
program by about 1%. While there are certainly programs where this is
worthwhile, they aren't the ones that I usually work on.
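
(The arithmetic, spelled out with the same made-up numbers:)

    /* Back-of-the-envelope arithmetic for the 10%/10% example above. */
    #include <stdio.h>

    int main(void)
    {
        double frac   = 0.10;  /* fraction of run time spent in the routine */
        double faster = 0.10;  /* how much faster the routine itself gets   */
        printf("whole-program improvement: about %.1f%%\n",
               frac * faster * 100.0);   /* 0.10 * 0.10 -> about 1% */
        return 0;
    }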


Often, one can stare at the source code for a while and figure out ways to
``trick'' the compiler into producing faster code -- by manually unrolling
loops, for example. Although this suffers from some of the same problems
as writing assembly (optimizations for one machine are often
pessimizations for another machine), such source code `tweaks' are often
more robust, portable, and reliable than hand-written assembly.
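
For instance, here's the sort of manual unrolling I have in mind (an
illustrative fragment of my own, not taken from any particular program):

    /* A copy loop manually unrolled by four.  On some machines this
     * hides loop overhead; on machines with small instruction caches,
     * or with compilers that already unroll, it can be a
     * pessimization.  Measure before keeping it. */
    void copy_ints(int *dst, const int *src, int n)
    {
        int i;

        for (i = 0; i + 3 < n; i += 4) {
            dst[i]     = src[i];
            dst[i + 1] = src[i + 1];
            dst[i + 2] = src[i + 2];
            dst[i + 3] = src[i + 3];
        }
        for (; i < n; i++)          /* leftover iterations */
            dst[i] = src[i];
    }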


I think there was a time when the majority of the community accepted
``unsafe at any speed.'' Today, I think most people have given up on
assembly for all but the most critical routines, because even machines
that offer binary compatibility often perform better if you recompile with
a compiler that knows about the machine. Assembly code can't simply be
recompiled[*]; it has to be rewritten. Thus, I think the focus, even
among performance freaks, has shifted to exposing the best information
to the compiler and to writing the best compilers.


[*] Unless you have an optimizing assembler, in which case you're just
writing in a very low-level high-level language and you're using a compiler!


Of course, I'm a compiler person, so it's only to be expected that I'd
hold some such opinion.


;-D on ( Recoding a dead horse ) Pardo
--

