From: firstname.lastname@example.org (Bill Leonard)
Organization: Harris Computer Systems, Ft. Lauderdale FL
Date: Wed, 15 Nov 1995 19:04:42 GMT
email@example.com (Andy Glew) writes:
> I hope that Intel's compiler team - who do a great job - doesn't mind
> if I say that I, as an architect, sometimes wish that they spent less
> time worrying about compile time, and more about getting even better
> SPEC numbers.
> >Do there exist 'release' compilers which will, given the right
> >optimisation parameters, spend a week on a P6 compiling a program
> >but produce noticeably better code than normal, 'development' compilers?
> Dammit, I wish that we had such a compiler!!!! I would gladly spend a
> week optimizing to get better release numbers.
Trouble is, the time taken in most optimizers is not in performing the
optimizations, but in looking for the opportunities. Most optimizations
involve solving a set of data flow equations -- the solution will tell you
what optimizations you can do. The solution time depends mostly on the
size and nature of the input program, not on the end result. So you can
spend a lot of time solving the equations only to find there are no (or
few) opportunities for optimization.
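To make the point concrete, here is a minimal sketch (in Python, with a toy two-block CFG; the block names and variable sets are invented for illustration) of the classic liveness data-flow equations solved by round-robin iteration. Note that the solver does the same amount of work whether or not any dead variables turn up at the end:

```python
# Toy round-robin solver for the liveness data-flow equations:
#   live_out[b] = union of live_in[s] over successors s of b
#   live_in[b]  = use[b] | (live_out[b] - def[b])
# The cost depends on CFG size and iteration count, not on how many
# optimization opportunities (dead variables) are ultimately found.

def solve_liveness(succ, use, defs):
    """succ: block -> list of successors; use/defs: block -> set of variables."""
    live_in = {b: set() for b in succ}
    live_out = {b: set() for b in succ}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for b in succ:
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            inn = use[b] | (out - defs[b])
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

# Hypothetical two-block example: B0 defines x and y, B1 uses only x,
# so y is dead on exit from B0 -- but we only learn that after solving.
succ = {"B0": ["B1"], "B1": []}
use  = {"B0": set(), "B1": {"x"}}
defs = {"B0": {"x", "y"}, "B1": set()}
live_in, live_out = solve_liveness(succ, use, defs)
print(live_out["B0"])   # {'x'} -- y is dead, a candidate for elimination
```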
If someone wants a topic for research, here it is: Come up with a fast (and
reliable) predictor of optimization opportunities.
> Now, admittedly, there are guys who spend days figuring out the best
> combinations of compiler switches - but each compilation is "human
> scale". I know of several optimizations that were scaled back because
> they took too long.
Yes, I'd say this is pretty common. There are many optimizations one can
do that take a long time to discover but which actually occur rarely.
Another tradeoff commonly made is whether to build a really complicated
algorithm that will find every last opportunity, or a simpler algorithm
that gets 80% or 90% of the cases.
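As an illustration of the simpler end of that tradeoff (a sketch, assuming a toy three-address IR in which each temporary is assigned once), here is local value numbering confined to a single basic block. It cheaply catches redundant computations within the block, but by construction misses any redundancy spanning block boundaries, which only a costlier global algorithm would find:

```python
# Local value numbering over one basic block: finds redundant computations
# within the block cheaply, but misses redundancies that cross block
# boundaries (the job of a more expensive global algorithm).

def local_vn(block):
    """block: list of (dest, op, arg1, arg2) tuples; returns rewritten block.
    Assumes each destination is assigned only once (SSA-like)."""
    table = {}      # (op, arg1, arg2) -> variable already holding that value
    out = []
    for dest, op, a1, a2 in block:
        key = (op, a1, a2)
        if key in table:
            out.append((dest, "copy", table[key], None))  # reuse earlier result
        else:
            table[key] = dest
            out.append((dest, op, a1, a2))
    return out

block = [
    ("t1", "add", "a", "b"),
    ("t2", "add", "a", "b"),   # redundant: same op and operands as t1
    ("t3", "mul", "t1", "c"),
]
print(local_vn(block)[1])   # ('t2', 'copy', 't1', None)
```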
> In fact, one of the most difficult to detect problems in working
> with a compiler is recognizing when an arbitrary limit, like "turn off
> basic block optimizations if there are more than 2000 basic blocks in
> a procedure", has been exceeded - especially when the limit is not
> necessarily available as a command line switch. And when the limit
> was provided to limit compile time on some old processor, and now
> should be considerably scaled up.
That's a good point, but there's another side to this issue (of course).
Did your customer buy a faster machine so that his compiles would take the
same amount of time? Probably not. So you have to try to decide how much
to scale up so that (a) compilations on the new processor are still faster
than on the old, and (b) you get the benefit of increased optimization.
Good luck with that one! :-)
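A back-of-the-envelope version of that decision, under an assumed quadratic cost model (the numbers and the cost model are hypothetical, chosen only to show the shape of the calculation): if an analysis costs roughly k * n**2 for n basic blocks, a 4x-faster machine lets you raise a 2000-block limit only to 4000 blocks at equal compile time, not 8000:

```python
# Hypothetical numbers: holding compile time constant under an assumed
# k * n**2 analysis cost, a speedup of s permits sqrt(s) times the old limit.
import math

old_limit = 2000
speedup = 4.0                                # new machine vs. old
new_limit = old_limit * math.sqrt(speedup)   # equal-time limit under n**2 cost
print(int(new_limit))                        # 4000, not 8000
```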
> (b) Used "compilation time" as a command line parameter. I.e. I wish
> I could say "take as long as 30 minutes to optimize, but not much
> longer" - and have the compiler quickly produce a quick and dirty
> code, and then refine it a few times by applying more and more
> optimizations.
Yep, that would be nice. But of course it's pretty hard.
For instance, your optimizer is going to spend a while crunching on some
intermediate form to optimize it. Then your register allocator is going to
crunch on it some more to assign registers. Both are complex algorithms
whose running time is difficult to predict in advance. So how do
you decide when to cut off the optimizer, given that you don't know how
long the register allocator is going to need?
And then there's the instruction scheduler, which is probably also fairly
complex, and whose time requirements are likewise difficult to predict in
advance (especially in advance of generating the code!).
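A crude sketch of what such a budgeted driver might look like (in Python, with invented pass names; the real difficulty, as described above, is precisely that `backend_estimate_s` is only a guess):

```python
import time

def optimize_with_budget(code, passes, budget_s, backend_estimate_s):
    """passes: list of (name, fn) ordered cheap to expensive; fn: code -> code.
    backend_estimate_s: guessed time the register allocator and scheduler
    will still need -- the guess the text above says is hard to make."""
    deadline = time.monotonic() + budget_s
    best = code                      # quick-and-dirty result, always valid
    for name, fn in passes:
        remaining = deadline - time.monotonic()
        if remaining <= backend_estimate_s:
            break                    # stop refining; leave time for the backend
        best = fn(best)
    return best

# Toy passes: each "pass" just tags the code to show that it ran.
cheap = ("peephole", lambda c: c + "+peephole")
costly = ("global-cse", lambda c: c + "+cse")
result = optimize_with_budget("code", [cheap, costly], budget_s=1.0,
                              backend_estimate_s=0.0)
print(result)   # code+peephole+cse
```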
This is not to say that Mr. Glew's observations are not pertinent, only
that there are good reasons why this hasn't been done yet.
Harris Computer Systems Corporation
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309