|Register optimizations ArielMars@aol.com (1995-03-02)|
|Re: Register optimizations firstname.lastname@example.org (1995-03-05)|
|Re: Register optimizations email@example.com (1995-03-08)|
|Re: Register optimizations firstname.lastname@example.org (1995-03-09)|
|Re: Register optimizations email@example.com (1995-03-12)|
|Re: Register optimizations firstname.lastname@example.org (1995-03-16)|
|Re: Register optimizations email@example.com.OZ.AU (1995-03-18)|
|From:||firstname.lastname@example.org (Anton Ertl)|
|Organization:||Institut fuer Computersprachen, Technische Universitaet Wien|
|Date:||Thu, 16 Mar 1995 17:22:35 GMT|
: I've been looking at a number of methods of using registers to speed up
: expression execution. It seems that great use could be made of registers,
: especially on RISC machines with lots of them available.
email@example.com (Bob Mercier) writes:
|> I know there is a ton of literature on this but it would be nice if
|> there were a suggestion or two for where a relative beginner should
|> start. Many of us wrote 'tiny' compilers in school that generated
|> assembly code and could be used for real problems. What would be
|> a managable next step for improving these compilers and learning
|> a little more about register allocation and code optimization?
Preston Briggs's proposed approach is probably more ambitious than the
original posters had in mind. I will discuss some simpler things
here. I would not call the result an optimizing compiler, but you will
be able to write quite efficient programs in it (if you don't insist on
maximum readability). You would learn much about optimization and
register allocation during program writing, not so much during the
compiler writing.
|> The dragon book has a good algorithm for register allocation for
|> a single expression, that is as long as you're not doing common
|> subexpression elimination.
I don't think this algorithm is really worth bothering with. Most
expressions are quite simple, so the Sethi-Ullman algorithm does not
save much. Just assume that the left subtree needs more registers (and
therefore should be evaluated first if you want to save
registers). This works well on most expressions.
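For the curious, the Sethi-Ullman labeling from the dragon book is small
enough to sketch anyway; here is one possible version in C (the node
layout is invented for illustration, a leaf being an operand and an
inner node a binary operator):

```c
#include <assert.h>

/* Hypothetical expression-tree node: a leaf or a binary operator. */
struct node {
    struct node *left, *right;   /* both NULL for leaves */
};

/* Sethi-Ullman labeling: number of registers needed to evaluate the
   tree without spilling, if the costlier subtree is evaluated first. */
int regs_needed(struct node *n)
{
    int l, r;
    if (!n->left && !n->right)
        return 1;                /* a leaf loads into one register */
    l = regs_needed(n->left);
    r = regs_needed(n->right);
    if (l == r)
        return l + 1;            /* a tie costs one extra register */
    return l > r ? l : r;        /* otherwise the bigger side dominates */
}
```

For (a+b)*(c+d) this yields 3, whereas evaluating strictly left-to-right
without the labels would give the same here; the labels only pay off on
larger, unbalanced trees, which is the point above.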
Or you could do register allocation and instruction scheduling on the
basic block level.
|> 1. Allocate 10 registers for 'register' variables.
No, allocate 4-6 registers for expressions (experiment for the best
number); the assembler/linker/OS/run-time-system will also want a few
registers. You can use the rest for variables.
|> 2. Do a simple scan of the intermediate code and allocate
|> the registers to the 10 most referenced locals.
Even simpler, use register declarations (weighted register
declarations would be best), or just assign the registers to the first
few locals declared.
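The one-pass reference count suggested above is also easy; a sketch,
where the intermediate-code encoding and the constants NLOCALS and
NREGS are made up for the example:

```c
#include <assert.h>
#include <string.h>

#define NLOCALS 8   /* invented: locals in the current function */
#define NREGS   3   /* invented: registers left over for variables */

/* Hypothetical intermediate code: a flat stream of local-variable
   indices, one entry per reference to that local. */
void pick_register_locals(const int *refs, int nrefs, int in_reg[NLOCALS])
{
    int count[NLOCALS] = {0};
    int i, r;
    for (i = 0; i < nrefs; i++)
        count[refs[i]]++;               /* one scan: count references */
    memset(in_reg, 0, NLOCALS * sizeof in_reg[0]);
    for (r = 0; r < NREGS; r++) {       /* mark the NREGS busiest locals */
        int best = -1;
        for (i = 0; i < NLOCALS; i++)
            if (!in_reg[i] && count[i] > 0 &&
                (best < 0 || count[i] > count[best]))
                best = i;
        if (best < 0)
            break;                      /* fewer locals than registers */
        in_reg[best] = 1;
    }
}
```

Locals whose address is taken would simply be left out of the count, as
the quoted suggestion says.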
|> You can
|> keep a bit in the symbol table indicating that
|> a variable's address had been taken. Skip these.
That's a C-ism. Don't allow such things in the language you compile.
|> Any other >simple< suggestions?
Well, most basic block optimizations are pretty simple.
First of all, I would do code selection using tree pattern matching
with burg or iburg (Davidson-Fraser is more general, but looks more
complicated to me). Or you can use an ad-hoc approach (depends on your
source and target languages).
The next thing to consider is instruction scheduling. A simple
algorithm is Griesemer's direct placement, but list scheduling is not
too complex, either. If you compile to MIPS assembly, the assembler
will schedule the instructions for you.
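List scheduling really can be kept small. A toy version for one basic
block, with an invented dependency-matrix encoding (one instruction
issues per cycle, and among the ready instructions the lowest index is
picked greedily):

```c
#include <assert.h>

#define N 4   /* invented: instructions in the block */

/* dep[i][j] != 0 means instruction i must wait until j has completed;
   lat[j] is j's latency in cycles; order[] receives the schedule. */
void list_schedule(int dep[N][N], const int lat[N], int order[N])
{
    int done_at[N];                  /* cycle when a result is available */
    int placed[N] = {0};
    int cycle = 0, k = 0;
    while (k < N) {
        int best = -1, i, j;
        for (i = 0; i < N; i++) {    /* find a ready instruction */
            int ready = !placed[i];
            for (j = 0; ready && j < N; j++)
                if (dep[i][j] && (!placed[j] || done_at[j] > cycle))
                    ready = 0;       /* operand not available yet */
            if (ready) { best = i; break; }
        }
        if (best >= 0) {
            placed[best] = 1;
            done_at[best] = cycle + lat[best];
            order[k++] = best;
        }
        cycle++;                     /* stall if nothing was ready */
    }
}
```

With a two-cycle load (0), an add depending on it (1), an independent
instruction (2), and a use of the add (3), the scheduler moves the
independent instruction into the load delay, which is exactly what the
MIPS assembler would do for you.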
You can allocate the intermediate values at the basic block level
instead of at the expression level (should not be too complex,
either). You now get to the phase ordering problem of scheduling and
register allocation. My advice (for fast code) is to do the scheduling
first and assign one or two registers more for intermediate
values. Schedule again after the register allocation. In any case, you
should perform code selection first.
So much for the code generation tasks, now for the optimizations:
Constant folding is easy, and common subexpression elimination is not
hard (you may have to spend another reg or two for intermediate
values, however). And as you write, one can avoid some redundant loads.
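Constant folding over an expression tree is only a few lines; a minimal
sketch, assuming a made-up IR with constants, variables and two binary
operators:

```c
#include <assert.h>

/* Hypothetical IR node kinds. */
enum kind { CONST, VAR, ADD, MUL };

struct expr {
    enum kind kind;
    long val;                 /* for CONST */
    struct expr *l, *r;       /* for ADD/MUL */
};

/* Fold any operator whose operands are (after folding) both constants
   into a single CONST node; anything involving a VAR is left alone. */
void fold(struct expr *e)
{
    if (e->kind != ADD && e->kind != MUL)
        return;
    fold(e->l);
    fold(e->r);
    if (e->l->kind == CONST && e->r->kind == CONST) {
        e->val = (e->kind == ADD) ? e->l->val + e->r->val
                                  : e->l->val * e->r->val;
        e->kind = CONST;      /* the children are now dead; a real
                                 compiler would free or reuse them */
    }
}
```

Folding (2+3)*4 down to 20 in one post-order walk; x+4 stays untouched.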
Finally, a few optimizations with larger scope might be worthwhile and
simple enough to implement. E.g., strength reduction in loops would be
important unless the source language offers pointer arithmetic. If the
source language has appropriate semantics (i.e., a FOR loop, where
changing the loop index and gotos are disallowed), this is quite
simple to implement for the most interesting cases (array accesses
(and other expressions), where the only variables are loop
indices). However, with such an optimization you get more complicated
register allocation; so maybe you should use a language with pointer
arithmetic after all.
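To make the strength-reduction idea concrete, here is the
transformation written out at the source level: the indexed loop is
what the compiler starts from, the pointer loop is what it should
produce (the function names are mine):

```c
#include <assert.h>

/* Before: a FOR loop indexing a[i]; forming &a[i] costs a shift (or
   multiply) and an add per iteration on a byte-addressed machine. */
long sum_indexed(const long *a, int n)
{
    long s = 0;
    int i;
    for (i = 0; i < n; i++)
        s += a[i];                /* address = a + i*sizeof(long) */
    return s;
}

/* After strength reduction: the induction expression &a[i] is carried
   in a pointer and bumped by a constant each iteration; the index i
   disappears, replaced by an end-pointer test. */
long sum_reduced(const long *a, int n)
{
    long s = 0;
    const long *p, *end = a + n;
    for (p = a; p < end; p++)     /* one add per iteration, no multiply */
        s += *p;
    return s;
}
```

The extra pointer is exactly the "more complicated register allocation"
mentioned above: each reduced expression ties up a register across the
whole loop.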
Note, however, that this is a different line of development from
classic optimizing compilers, and the result will not be an
intermediate step on the road to optimizing compilers.
M. Anton Ertl