Re: Best, Simple versus Best

ryg@summit.novell.com (Robert Geva)
Fri, 7 Apr 1995 12:02:07 GMT


Newsgroups: comp.compilers
From: ryg@summit.novell.com (Robert Geva)
Keywords: optimize
Organization: Compilers Central
References: 95-03-124
Date: Fri, 7 Apr 1995 12:02:07 GMT

[Preston Briggs wrote:]
> OK, here's a specific example. I don't see the words "strength
> reduction" in the paper, but there's one paragraph that mentions
> "loop induction analysis." Now, it doesn't explicitely say that
> nothing except FOR-loop indices are considered, but he does say that
> only limited classes of FOR-loop indices are considered. After
> reading it, I come away with the (perhaps wrong) idea that he only does
> strength reduction on FOR-loop indices where all the uses are
> multiplied by the same constant value.
>
> So, if you address 2 arrays of different-sized structures, or 2 arrays
> of different shape, or look at an array in a WHILE loop (imagine
> you're writing something exotic like quicksort), then he won't do
> strength reduction. Doesn't seem very general to me. Seems like
> he'll soon have users trained to use FOR loops in unnatural places.
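
To make the transformation concrete, here is a rough sketch of what
strength reduction on a FOR-loop index looks like, and a WHILE loop
the restricted analysis would miss. The code is my own illustration,
not taken from the paper; assume a is an array of 4-byte ints.

    int i, n, pivot, sum = 0;
    int a[100], *p, *end;

    /* A FOR-loop index the restricted analysis recognizes:
       every use of i is scaled by the same constant, 4. */
    for (i = 0; i < n; i++)
        sum += a[i];                /* address = a + i*4 */

    /* After strength reduction the multiply is gone; the
       induction variable is a pointer bumped by 4 per trip. */
    for (p = a, end = a + n; p < end; p++)
        sum += *p;

    /* The same access pattern in a WHILE loop, like one of
       the partition scans in quicksort.  An optimizer that
       only looks at FOR-loop indices leaves a + i*4 alone. */
    while (a[i] < pivot)
        i++;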


I agree that while loops should be optimized as well as for loops.
However, strength reduction in a loop that accesses structs of different
sizes is problematic. Performing it on those loops is certainly more
general, but does it improve performance? It is very likely to introduce
temps, and their overhead can cost more than the expected gain. The
tradeoff is, IMHO, target specific: for one thing, it depends on the
available addressing modes. For the Pentium, I voted against the more
general implementation in favor of faster executables.
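
To show where the temps come from, consider one loop that walks two
arrays of structs of different sizes (again my own sketch, with
made-up element sizes):

    struct s4  { int x; };                 /*  4-byte element */
    struct s20 { int x; char pad[16]; };   /* 20-byte element */

    /* Before: the subscripts scale i by different constants. */
    void copy(struct s4 *a, struct s20 *b, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i].x = b[i].x;        /* a + i*4  and  b + i*20 */
    }

    /* After general strength reduction: two pointer temps
       replace the scaled uses of the index. */
    void copy_reduced(struct s4 *a, struct s20 *b, int n)
    {
        struct s20 *end = b + n;
        for (; b < end; a++, b++)   /* bump by 4 and by 20 */
            a->x = b->x;
    }

On the Pentium, a + i*4 already fits a scaled addressing mode and costs
nothing extra, so reducing it buys little, while the two pointer temps
compete for one of only eight general registers. Whether copy_reduced is
actually faster is exactly the target-specific question.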


The general approach may be more elegant, but it is not necessarily
better. For one thing, recognizing a larger set of cases as opportunities
for an optimization is sometimes the easy way out; it may indicate that
your cost function is not tuned finely enough. Implementing an
optimization as generally as possible is a strange goal. The goal of
optimization is to generate faster code, and that sometimes means
optimizing fewer cases.




Custom made optimizations
Novell, Unix systems group
(908) 522 6078
ryg@summit.novell.com
--

