Re: Register optimizations (Preston Briggs)
Sun, 12 Mar 1995 02:46:54 GMT

          From comp.compilers


Newsgroups: comp.compilers
From: (Preston Briggs)
Keywords: registers, optimize
Organization: Compilers Central
References: 95-03-025 95-03-038
Date: Sun, 12 Mar 1995 02:46:54 GMT

>I know there is a ton of literature on this but it would be nice if
>there were a suggestion or two for where a relative beginner should
>start. Many of us wrote 'tiny' compilers in school that generated
>assembly code and could be used for real problems. What would be
>a manageable next step for improving these compilers and learning
>a little more about register allocation and code optimization?

I think the big thing is to have a plan for your compiler before you
start. If you want to do a lot of optimization (and who wouldn't :-),
then you'll want to plan for it from the beginning. Unfortunately,
it's hard to plan for directions you haven't yet explored, so I like
to read and learn from others' experiences. I'll write a bit more
below, but I recommend, very highly, that you chase down some papers.
My favorite, as far as overall direction for an optimizing compiler,
especially for a research or hobbyist effort, is

    title="An Overview of the {PL.8} Compiler",
    author="Marc A. Auslander and Martin E. Hopkins",
    note=pldi82, -- the SIGPLAN 1982 Symposium on Compiler Construction

In brief, here's a direction to work toward. Think of your compiler
as having three parts: front-end, optimizer, and back-end.

The front end should scan, parse, do error checking, and emit
intermediate code. The back end should consume intermediate code,
choose instructions and do register allocation, and emit assembly
code. If you do a good job with the design of your intermediate form
(very hard work), you ought to be able to totally ignore optimization
and have the front end and back end communicate only through the
intermediate (no symbol table, auxiliary files, etc).

The front end should be resolutely non-optimizing. Just emit
intermediate code in as simple and straightforward a fashion as
possible. When designing your intermediate code, allow an infinite
number of registers (let the register allocator worry about cleaning
up the mess). Have the front end use registers for everything it can
-- local variables, parameters, expression temporaries, addressing
computations, etc. Don't bother trying to track their lifetimes so
that you can reuse them; that's the job of the allocator.
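To make the "infinite registers, no cleverness" idea concrete, here's a minimal Python sketch of a front end emitting three-address intermediate code for an expression tree. The AST encoding, `new_reg`, and `emit_expr` are all invented for illustration, not taken from any particular compiler.

```python
import itertools

# Hand out fresh virtual registers forever; never reuse one.
_fresh = itertools.count(1)

def new_reg():
    return f"r{next(_fresh)}"

def emit_expr(node, code, env):
    """Naively emit three-address code for an expression tree.
    A node is either a variable name (str) or ("add", left, right).
    Returns the virtual register holding the result."""
    if isinstance(node, str):                 # variable reference
        return env[node]
    op, left, right = node
    rl = emit_expr(left, code, env)
    rr = emit_expr(right, code, env)
    rd = new_reg()                            # always a fresh register
    code.append(f"INTEGER-{op.upper()} {rl} {rr} => {rd}")
    return rd

# Front end for: a + b + a (locals a, b already live in registers).
code = []
env = {"a": new_reg(), "b": new_reg()}
result = emit_expr(("add", ("add", "a", "b"), "a"), code, env)
for line in code:
    print(line)
```

Note the common subexpression `a` is simply read twice and no register is ever reused; untangling that is the optimizer's and allocator's job.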

I like an intermediate form that's quite low level, sort of resembling
RISC code. Typical operations would look like this:

INTEGER-ADD r75 r100 => r43
FLOAT-SUB r16 r15 => r1024
JUMP label
label: COMPARE r1 r2 => r3
BRANCH r3 lt true-label false-label
INTEGER-LOAD r33 => r456 -- load from where r33 points
INTEGER-DOUBLE r456 => r100 -- convert
DOUBLE-STORE r100 r200 -- store r100 where r200 points

and so forth. My preference for intermediate instructions is that
they be lower level (more primitive) than any assembly language.
Then, the task of instruction selection in the back end is to combine
primitives to give reasonable assembly instructions. Lots of papers
on this; I like the ones by Davidson and Fraser.
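As a toy illustration of that combining idea (in the Davidson/Fraser peephole spirit, heavily simplified), here's a Python sketch that folds an address-computing add into the load that uses it. The tuple encoding and the `LOAD-INDEXED` instruction name are invented; a real combiner would also verify that the intermediate address register is dead afterwards, which this sketch simply assumes.

```python
def combine(code):
    """One peephole pass over adjacent primitive pairs: when a pair
    matches something the machine can do in one instruction, replace
    it with the combined form.  Instructions are tuples with the
    opcode first and the destination register last."""
    out = []
    i = 0
    while i < len(code):
        a = code[i]
        b = code[i + 1] if i + 1 < len(code) else None
        # An ADD computing an address, immediately consumed by a LOAD:
        # fold the pair into a base+index addressing mode.
        # (Assumes the address register a[3] is dead after the load.)
        if (b is not None and a[0] == "INTEGER-ADD"
                and b[0] == "INTEGER-LOAD" and b[1] == a[3]):
            out.append(("LOAD-INDEXED", a[1], a[2], b[2]))
            i += 2
        else:
            out.append(a)
            i += 1
    return out

prog = [
    ("INTEGER-ADD", "r10", "r11", "r12"),   # r12 = r10 + r11 (an address)
    ("INTEGER-LOAD", "r12", "r13"),         # r13 = mem[r12]
    ("INTEGER-ADD", "r13", "r13", "r14"),   # unrelated add, left alone
]
print(combine(prog))
```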

For register assignment, I like Chaitin's work. His stuff, along with
the paper above, really set the tone for everything I've tried to
build. To read about how it works and how to built it, get a copy of
my thesis (you'll also get a fair dose of my opinions about compiler
construction). To read about how it works and how to build it, you
can grab it via anonymous ftp from, in the directory public/preston.

After you build the front end and back end, then you can start having
some real fun, building pieces of the optimizer. The optimizer should
be built in pieces. I use many passes, each doing one transformation.
Each pass inhales the intermediate code for 1 routine, builds a
control-flow graph, does something to the code, and then exhales the
result. Try and build a skeleton that does exactly this, and then you
can use it for every pass.
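A sketch of that skeleton in Python, with everything invented for illustration: instructions are (label, opcode, operands) tuples, and the "control-flow graph" here is just the list of basic blocks, with successor edges omitted for brevity.

```python
BRANCHES = {"JUMP", "BRANCH"}

def basic_blocks(code):
    """Split a routine into basic blocks: a block starts at a
    label and ends after a jump or branch."""
    blocks, cur = [], []
    for label, op, args in code:
        if label is not None and cur:     # a label starts a new block
            blocks.append(cur)
            cur = []
        cur.append((label, op, args))
        if op in BRANCHES:                # a jump/branch ends the block
            blocks.append(cur)
            cur = []
    if cur:
        blocks.append(cur)
    return blocks

def run_pass(code, transform):
    """The reusable skeleton: inhale the code for one routine,
    build the blocks, transform each, exhale the result."""
    out = []
    for block in basic_blocks(code):
        out.extend(transform(block))
    return out

def identity(block):                      # a do-nothing transformation
    return block

prog = [
    (None, "INTEGER-ADD", ("r1", "r2", "r3")),
    (None, "JUMP", ("L1",)),
    ("L1", "COMPARE", ("r1", "r2", "r4")),
    (None, "BRANCH", ("r4", "lt", "L1", "L2")),
]
print(len(basic_blocks(prog)), "blocks")
```

Each real pass then just plugs its own `transform` into the same skeleton.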

Start with something simple like dead code elimination. Lots of ways
to do this (and everything else) -- check the literature for the way
that appeals to you. Then maybe value numbering and/or constant
propagation. Then continue, in whatever direction your interests take
you. I recommend looking at the resulting code a lot, since it'll
initially have lots of room for improvement. Also be sure to scrap
the whole thing and start over now and then. Hard to do maybe, but
how else can you take advantage of all your experience?
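For instance, here's a minimal Python sketch of dead code elimination on straight-line code: a backward walk that keeps an instruction only if its result is live, at which point its operands become live in turn. The tuple format is invented, and a real pass would compute liveness over the whole control-flow graph rather than one block.

```python
def eliminate_dead(code, live_out):
    """Dead code elimination on straight-line code.  Instructions
    are (opcode, srcs, dest) tuples; dest is None for instructions
    with side effects (stores, branches), which are always kept."""
    live = set(live_out)
    kept = []
    for op, srcs, dest in reversed(code):
        if dest is None or dest in live:
            kept.append((op, srcs, dest))
            live.discard(dest)            # kill the definition...
            live.update(srcs)             # ...then its uses go live
    kept.reverse()
    return kept

prog = [
    ("INTEGER-ADD", ("r1", "r2"), "r3"),
    ("INTEGER-ADD", ("r1", "r1"), "r4"),  # r4 never used: dead
    ("INTEGER-ADD", ("r3", "r3"), "r5"),
]
print(eliminate_dead(prog, live_out={"r5"}))
```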

>It'd be interesting to know what others feel (from experience)
>are the classic optimizations worth pursuing for the hobbyist/hacker. Put
>another way, if you had to limit your optimizer to 1000 lines of
>code what strategy would you choose?

1000 lines probably isn't enough to do everything efficiently.
Perhaps if you ignore issues of speed (mainly algorithmic complexity),
you could do quite a lot relying on lists for all your data structures
(especially if coding in Scheme or something similar).

Classic optimizations are dead code elimination, constant propagation,
value numbering, strength reduction, and elimination of partial
redundancies (which includes global common subexpressions and loop
invariant code motion).
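As one concrete instance, here's a sketch of local value numbering over a single basic block in Python (the tuple format is invented). Note it quietly assumes each register is assigned at most once within the block, so that value numbers never go stale; that is exactly what SSA form guarantees, which is one reason value numbering and SSA fit together so well.

```python
import itertools

def value_number(block):
    """Local value numbering: give each computed value a number, and
    when an expression with the same operator and operand numbers
    reappears, reuse the earlier register instead of recomputing.
    Instructions are (opcode, srcs, dest) tuples."""
    fresh = itertools.count(1)
    num_of = {}        # register -> value number
    expr_of = {}       # (opcode, operand value numbers) -> register
    out = []
    def vn(reg):
        if reg not in num_of:
            num_of[reg] = next(fresh)
        return num_of[reg]
    for op, srcs, dest in block:
        key = (op, tuple(vn(s) for s in srcs))
        if key in expr_of:                     # redundant: emit a copy
            out.append(("COPY", (expr_of[key],), dest))
            num_of[dest] = num_of[expr_of[key]]
        else:
            out.append((op, srcs, dest))
            num_of[dest] = next(fresh)
            expr_of[key] = dest
    return out

block = [
    ("INTEGER-ADD", ("r1", "r2"), "r3"),
    ("INTEGER-ADD", ("r1", "r2"), "r4"),   # same value as r3
]
print(value_number(block))
```

The copies this introduces are cheap to clean up later, typically by the register allocator's coalescing.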

If I were aiming at small and fast, I'd convert to SSA, do value
numbering and constant propagation in a walk over the dominator tree
(can actually be done as part of the conversion to SSA), then dead
code elimination, and finally register allocation. Unfortunately, the
register allocators I like are fairly hefty and would consume more
time and code space than the rest of the compiler. To balance the
effort, I'd go with a cheaper allocator or a more aggressive
optimizer.

Preston Briggs
