Code Opt. in One-pass Compilation firstname.lastname@example.org (1996-07-24)
Re: Code Opt. in One-pass Compilation email@example.com (Tom Hawker) (1996-07-26)
Re: Code Opt. in One-pass Compilation firstname.lastname@example.org (1996-07-27)
Re: Code Opt. in One-pass Compilation email@example.com (1996-07-31)
Re: Code Opt. in One-pass Compilation firstname.lastname@example.org (1996-08-04)
Re: Code Opt. in One-pass Compilation email@example.com (Henry Spencer) (1996-08-09)
From: firstname.lastname@example.org (Peter Froehlich)
Date: 4 Aug 1996 00:33:38 -0400
Organization: AMIGA CITY - Public Amiga-BBS, Munich, Germany
References: 96-07-176 96-07-187 96-07-192
email@example.com (Henry Baker) wrote:
> I haven't seen very much coverage of _small_ and _fast_ compilers in the
> compiler literature. I guess it's difficult to get tenure when you work
> on something that practical. ;-) ;-) ;-)
Well, check out Project Oberon and the associated compilers for Oberon
and Oberon-2. Even though the standard Oberon-2 compiler OP2 builds an
intermediate AST, it is still small and fast compared to compilers for
languages like C or Ada. This is mainly due to the language design
(IMHO), but also because the compilers use resources quite economically.
Regarding optimization, check out Marc Brandis's doctoral thesis,
"Optimizing Compilers for Structured Languages" (ftp://inf.ethz.ch), in which
he presents a very good intermediate representation (guarded single-assignment
form) and shows how it can be constructed in a single pass.
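The core idea behind any single-assignment form can be shown in a few lines. The sketch below is purely illustrative and is not Brandis's algorithm: it renames straight-line assignments in one forward pass so that every variable is defined exactly once, with uses rewritten to the latest version (the function name and statement encoding are my own invention).

```python
# Illustrative sketch (NOT Brandis's guarded single-assignment construction):
# rename straight-line assignments into single-assignment form in one pass.
# Each definition of a variable gets a fresh version number, and every use
# is rewritten to the current version, so each name is assigned only once.

def to_single_assignment(stmts):
    """stmts: list of (target, [operand, ...]) tuples representing
    straight-line assignments. Returns the renamed statements, with
    versioned names like 'x1', 'x2'."""
    version = {}  # variable -> current version number
    out = []
    for target, operands in stmts:
        # Rewrite uses to the latest version of each operand;
        # operands never defined so far are left untouched.
        renamed_ops = [f"{op}{version[op]}" if op in version else op
                       for op in operands]
        # Define a fresh version for the target.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed_ops))
    return out

prog = [("x", ["a", "b"]),   # x := a + b
        ("x", ["x", "c"]),   # x := x + c  (redefines x)
        ("y", ["x"])]        # y := x
print(to_single_assignment(prog))
# → [('x1', ['a', 'b']), ('x2', ['x1', 'c']), ('y1', ['x2'])]
```

The interesting part of Brandis's work is doing this for structured control flow (where merge points need guards or phi-like operators) without a separate pass, which this straight-line toy sidesteps entirely.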
But generally, I think you should decide on a proper architecture for
every software project, and for an optimizing compiler the proper
architecture involves some kind of intermediate representation. Pure
one-pass attempts at optimization seem "futile", to quote the Borg.