Re: Death by error checks.

From: hbaker@netcom.com (Henry Baker)
Newsgroups: comp.compilers
Date: 19 Dec 1995 14:16:03 -0500
Organization: nil organization
References: 95-10-103 95-11-192 95-12-080
Keywords: optimize

Veeru Mehta <veeru@hpclearf.cup.hp.com> wrote:


> I believe the goal of compiler writers should be to help the
> programmer get rid of low level optimization details. One of the main
> advantages of a higher level programming language is that you get rid
> of the housekeeping jobs you had to do with a lower level
> language. This extends to any optimizations which can be taken care of
> in a mechanical way. Ideally the programmer should concentrate
> primarily on algorithmic improvements, but we see a lot of time
> consumed on improving an itsy bitsy hot-spot code; something that a
> profiler+compiler could do in a better way. In any case, to be
> completely sure, a programmer has to run a profiler anyway.


The programmer is never 'rid of low level optimization details', but
his job is transformed somewhat by 'smarter' compilers. Whereas
before he had to code in a 'low-level' programming language very
carefully to make sure that it generated well-performing machine
code, he now has to code in a 'high-level' programming language very
carefully to make sure that it generates well-performing machine
code. This means that he has to constantly look at the assembly
output of the compiler to make sure that it hasn't done something
completely harebrained on his particular program.
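

(For those who have never looked: most Unix compilers will hand you
that output directly -- e.g. 'cc -S foo.c' leaves the generated
assembly in foo.s -- though the exact flag varies from vendor to
vendor.)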


Unfortunately, whereas before, he could rely upon the compiler staying
just about as stupid from release to release, he now has to contend
with compilers that get 'smarter' from release to release. This means
that if he gets a new version of his compiler he now has to check that
the compiler's new smarts haven't completely destroyed his previous
work.


If you have any experience with C++ in the last few years, you'll know
exactly what I'm talking about. Of course, some of these changes were
'quiet' changes required by the standard. 'Quiet' in this case means
that the program computes the same answer, but now takes 10X as long
to do it.
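

To make that concrete, here is a made-up illustration (not
necessarily one of the actual changes, but the same flavor): whether
this code copies a 32K-byte object on every call depends entirely on
whether the compiler chooses to apply the return-value optimization,
which it is permitted but not required to do:

    struct Matrix { double m[64][64]; };   /* 32K bytes */

    Matrix add(const Matrix& a, const Matrix& b)
    {
        Matrix r;
        for (int i = 0; i < 64; i++)
            for (int j = 0; j < 64; j++)
                r.m[i][j] = a.m[i][j] + b.m[i][j];
        return r;   /* built in place, or copied wholesale?
                       either is conforming */
    }

Same answer either way; very different running time if add() sits in
an inner loop.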


The problem with 'optimizations' is that they're typically in the mind
of the beholder. What someone on a standards committee or a compiler
project might consider an 'optimization' may or may not be an
optimization for a particular program. This problem has become more
acute since most of the 'optimizations' in the last 10 years or so are
not of the 'win-win' variety, but assume some standard statistical
profile of the code, and if your program doesn't conform to this
profile, you could get 'pessimized' instead.
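

To make the 'profile' point concrete, here is a sketch (mine, not
any particular vendor's heuristic) of the kind of code where this
bites. Many compilers statically assume that the test in an error
check almost never fires, and arrange the machine code so that the
checking arm sits out of the fall-through path:

    int sum_checked(const int *buf, int n)
    {
        int i, sum = 0;
        for (i = 0; i < n; i++) {
            if (buf[i] < 0)        /* the "error" check */
                sum -= buf[i];     /* assumed to be the rare arm */
            else
                sum += buf[i];
        }
        return sum;
    }

If the data really is almost all non-negative, that layout is a win.
If half the data is negative, the branch goes the 'wrong' way every
other iteration, and the heuristic has pessimized precisely the
program it was meant to help.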


In such a case, profiling will at least tell you that you have a
problem, although fixing the problem will likely involve looking at
the generated machine code, hypothesizing why and under what
conditions the compiler would be 'smart' enough to generate such
atrocious code, and then changing the source in what appears to be a
completely inessential way. Needless to say, without access to the
compiler source or the compiler writer, this process can be _very_
slow. So much for 'smart' compilers improving productivity...
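

A typical (invented) instance of such an 'inessential' change:
copying a loop bound into a local, which ought to make no difference
whatsoever, but happens to steer one particular compiler away from
whatever clever transformation was hurting you:

    struct buf { int limit; int data[1024]; };

    /* before: the compiler does something 'smart' with p->limit */
    int total0(const struct buf *p)
    {
        int i, total = 0;
        for (i = 0; i < p->limit; i++)
            total += p->data[i];
        return total;
    }

    /* after: semantically identical, but the local copy happens
       to produce sane code from this compiler */
    int total1(const struct buf *p)
    {
        int i, total = 0, lim = p->limit;
        for (i = 0; i < lim; i++)
            total += p->data[i];
        return total;
    }

And of course the next compiler release may reverse which version is
the fast one.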


----


I will take to my soapbox, and again argue against those living in a
dream world who say that an 'optimization' _merely_ improves execution
speed. In most of the computer programs that either I, or any of the
people I know, have written, performance is _always_ an issue lurking
in the background, and people rewrite their code in very substantial
ways to gain performance. Although 'algorithmic' improvements are
capable of providing many orders of magnitude of improvement, good
quality code generation (including selective inlining completely
controlled by the user) is often worth 3X - 10X, and substantial
fractions of the total programming effort are aimed at achieving a
good deal less improvement than this.
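

By 'selective inlining completely controlled by the user' I mean
nothing fancier than this (a toy example; the 3X - 10X obviously
depends on the code):

    /* a two-line accessor called millions of times in inner loops */
    inline double get(const double *v, int i) { return v[i]; }

    double dot(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += get(a, i) * get(b, i);  /* without inlining, call
                                            overhead swamps the
                                            multiply-add */
        return s;
    }

The point is that the user decides which calls disappear, function
by function, instead of trusting a heuristic tuned on someone else's
benchmark.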


A compiler 'optimization' which cannot be understood or controlled by
a user is likely to end up being more of an irritation than much of a
help.


This is why a lot of the 'heuristic' optimizations found in compilers
work mainly for the standard benchmarks and some of the compiler
vendor's internal code (or for some of the compiler vendor's most
favored customers!), but are pretty much a waste of effort for nearly
everyone else.


Compiler 'optimizations' are kind of like all those weird exceptions
you find in the U.S. Tax Code. Many were put there for a single
highly visible constituent with a very persuasive lobbyist, but they
persist long after the need for them has evaporated -- e.g., the
factory moved to Mexico anyway, even after the generous tax exemption
was granted. No one ever bothers to check whether the optimization
actually benefits anyone; they simply assume that the optimization
must be there for a good reason.


--
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html

