Re: Speedy compilers



From: Janusz Szpilewski <>
Newsgroups: comp.compilers
Date: 30 Nov 1998 02:12:59 -0500
Organization: Alcatel Polska
References: 98-11-047 98-11-091 98-11-118
Keywords: performance

Andrew Fry wrote:
> The only code optimizations I see it
> doing regularly are; constant folding; strength reduction; and dead
> code removal.
> Now if they extended it to include common sub-expression and loop
> invariant code recognition, I would expect to see much faster and
> slightly smaller binaries.

Such optimizations were present in the 16-bit Borland Pascal/C++
compilers, but they were always implemented in a rather basic manner
and dealt mainly with simple cases resulting from poor programming
style. That does not help much with improving code speed or size.
Optimization of that kind can even be found in compilers written as
student end-of-term projects (I remember one).

For comparison, the optimization routines in MS VC++ are more
advanced. Strength reduction, usually understood as replacing a
multiplication with cheaper additions, can also be performed in the
opposite direction when, under the given circumstances (algorithm,
CPU), a direct multiplication turns out to be more efficient. You can
see that effect if you write a multiply function coded as repeated
addition in a loop.

Constant propagation is also implemented more effectively: you can
find entire function calls replaced with their precomputed results
when they are invoked with constant parameters and do not use
variables from an external scope.

It seems that with the move to 32-bit compilers Borland even reduced
the kinds of code optimization applied. That is probably a business
decision. Today Borland (well, Inprise) products are highly
specialized and targeted at enterprise-level programmers (that's what
Inprise stands for), who are equipped with powerful PCs and usually
face short deadlines. What they appreciate most is fast and easy
application building, which means the ability to reuse existing
components and fast compilation. Borland/Inprise with its visual tools
(Delphi, C++Builder, JBuilder) meets those requirements really well.
In that market, with better and better hardware available, the gain in
program size and speed due to optimization disappears compared with
the benefits of hardware upgrades. So the cost of maintaining complex
optimization routines may not be worth the results achieved, and it is
not what their clients expect from them. For that reason I think we
cannot expect any significant code optimization improvement from
Borland.

As far as the main discussion is concerned, the Borland compilers seem
to be an example of a different attitude toward the compile-speed
versus code-optimization trade-off.

Nevertheless, there are many domains where code optimization is highly
desirable, such as the multimedia and telecom sectors, where real-time
programming is common or specialized systems with reduced resources
are used. In such cases optimization helps in writing clear and
reusable code (which often means redundancy at the source level) while
still fitting within time and system constraints.

In the end, I think it is a good thing to be able to choose between a
fast compiler and a well-optimizing one, depending on the application
domain.

Janusz Szpilewski
