Re: Justifying Optimization (Michael Tiomkin)
30 Jan 2003 00:03:40 -0500

          From comp.compilers


From: (Michael Tiomkin)
Newsgroups: comp.compilers
Date: 30 Jan 2003 00:03:40 -0500
References: 03-01-088 03-01-119
Keywords: optimize, practice
Posted-Date: 30 Jan 2003 00:03:40 EST

(John Dallman) wrote in message news:03-01-119...
> > I've been developing for twenty years, and ever since I've been
> > allowed to have an opinion I've insisted that code ready for final
> > testing and deployment be optimized. I'm currently responsible for my
> > program's development strategy, and was recently blindsided by
> > resistance to this approach. Developers are stating that optimized
> > code produces errors and makes debugging more difficult.

    Well, it seems that your programmers remember the compilers of 20
years ago. I recall a Microsoft C compiler (v. 6.0?) of which people
said that if you wanted your program to run correctly, you shouldn't
use optimization. Unfortunately, that compiler was also wrong without
optimization if you needed IEEE-compliant floating point computation.
Modern compilers are usually much better.
    Debugging optimized code is a huge problem even now, in spite of
debuggers designed for optimized code. The problem is that in
optimized code you never know where your variables' values live or
which source lines the generated code corresponds to.

> Optimisers sometimes introduce errors, via bugs in them, but they also
> show up errors that haven't previously been noticed. If the final product
> is to be optimised, then it's absolutely necessary to do the serious
> testing on an optimised build. Trusting that an optimiser will never
> expose any correctness issues is just plain foolish.
> Initial development and the debugging that comes with that can be done on
> less- or non-optimised builds, but once you've integrated and are testing,
> you have to use something identical to the build that will be run in
> production.
> Overall, I suspect your programmers are being lazy (in the non-virtuous
> way).
> > While true that debugging is made more complicated when optimization
> > is used, I'm not considering that enough justification to avoid
> > optimization.
> Dead right.
> > The other argument, that optimization produces errors, is the one that
> > is new to me. While I've not had any personal indication of this, I
> > don't have any hard facts.
> They're right, occasionally. Showing up their errors is at least equally
> common, and much more humiliating for them.
> > Is there any substantiated data that says optimized code is more prone
> > to errors? Is there a generally accepted guideline in the community
> > that says when you should/should-not optimize?
> See above for both of those.
> > Is there a general level of compiler technology so that I can say that
> > I'd gain ~x% by optimizing?
> It depends, a lot, on the architecture you're targeting, the style of the
> software being written, and so on. That said, on any modern general-
> purpose architecture I'd expect to double throughput with a decent
> optimiser, and often do better. On one occasion, with Forte 6.2 on 32-bit
> Solaris 7, I picked up 30% more throughput just by allowing Forte to use
> UltraSPARC instructions.

    It depends on the system you're using. With C or C++ on a RISC you
can easily get a 100% to 200% improvement, or even more. On an x86 you
can even see a negative 'improvement', because the processor itself
performs a lot of optimization (e.g. out-of-order execution).

    Optimized code needs a different style of programming, and this is a
cultural issue. If your programmers are used to writing unoptimized
code, they might need re-education on writing optimizable code. I
think that learning optimized design/programming might be the first
step in moving them in this direction.
    This can be self-evident to a programmer who always looks for
performance improvements, but for a person who is happy to write a
program that merely works correctly, it is a huge difference. I recall
how amazed my coworker was when he saw that inlining 5-6 low-level
methods of a frequently used class yielded a 30% improvement in a
large program compiled with '-O2' on a RISC - eliminating the function
calls created larger basic blocks, which improved scheduling.

