# Re: Multiplication by constants

## chase@centerline.com (David Chase)Tue, 12 Sep 1995 18:22:46 GMT

From comp.compilers

Related articles
- Multiplication by constants, lederman@dstc.qut.edu.au (Jeff Ledermann) (1995-08-28)
- Re: Multiplication by constants, preston@tera.com (1995-09-05)
- Re: Multiplication by constants, chase@centerline.com (1995-09-12)

Newsgroups: comp.compilers
From: chase@centerline.com (David Chase)
Keywords: arithmetic
Organization: CenterLine Software
References: 95-09-018 95-09-066
Date: Tue, 12 Sep 1995 18:22:46 GMT

Jeff Ledermann <lederman@dstc.qut.edu.au> writes:
>I recently picked up Preston Briggs and Tim Harvey's excellent
>paper "Multiplication by Integer Constants"
>(ftp://cs.rice.edu/public/preston/optimizer/multiply.ps.gz).

preston@tera.com (Preston Briggs) writes:
> Everyone should understand that the little paper doesn't describe a
> new approach; instead, it's simply an implementation (with an attempt
> at an explanation) of Bernstein's approach
>
> title="Multiplication by Integer Constants",
> author="Robert L. Bernstein",
> journal=spe,
> volume=16,
> number=7,
> month=jul,
> year=1986,
> pages="641--652",
>
> >Their algorithm includes a neat memoizing feature so that once an
> >optimal factorization is found it never has to be calculated again
> >(for that run). Extending the persistence leads to the (probably
> >not terribly novel) approach of precalculating all factors (well up
> >to some limit anyway), using brute force to find optimal sequences.

> Perhaps David Chase will respond to this too.

Sure, I'll bite. I tinkered with this approach in an experiment
years ago (why I tried it, I no longer recall, but I did).
Precalculation works pretty well -- you do get a nice
constant-factor speed boost.

What I precalculated, however, was not a result, but merely whether
to avoid a particular (positive-to-negative) first step (*). This was
actually a misguided approach, because of a defect in my analysis of
the problem, but it did make the defective search go faster (the
defective search never found wrong answers, but it sometimes failed
to find the best one). However, you could apply the same idea to
deciding whether to use a simple (cheap) heuristic (non-factoring, or
only small-factoring) or a more expensive one. That decision reduces
to presence or absence in a table -- you could encode it with a
gigantic bit vector, for instance. Note that pre-memoizing the best
answer for each number up to N only requires remembering the method,
the shift, and the cost -- to reconstruct the factorization, you apply
the method and shift to get to your next constant, which in turn has
its own information recorded. You should be able to store the
information in 16 bits per constant.
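The table layout described above can be sketched roughly as follows (this is my illustration, not Chase's actual code): each constant's entry holds only (cost, method, shift), and the full shift/add/subtract sequence is reconstructed by re-applying the method to reach the next constant and looking that one up in turn. For brevity this search uses only shift, add-one, and subtract-one steps, not the factoring steps of the full Bernstein algorithm, so it finds good sequences rather than optimal ones.

```python
SHIFT, ADD1, SUB1 = 0, 1, 2  # method codes

def plan(c, memo):
    """Record memo[c] = (cost, method, shift) for computing c*x; return cost."""
    if c in memo:
        return memo[c][0]
    if c == 1:
        memo[c] = (0, SHIFT, 0)                # base case: t = x, no instructions
        return 0
    if c % 2 == 0:
        k = (c & -c).bit_length() - 1          # largest power of two dividing c
        memo[c] = (plan(c >> k, memo) + 1, SHIFT, k)
    else:
        down = plan(c - 1, memo) + 1           # reach c by adding x afterwards
        up = plan(c + 1, memo) + 1             # reach c by subtracting x afterwards
        memo[c] = (down, ADD1, 0) if down <= up else (up, SUB1, 0)
    return memo[c][0]

def sequence(c, memo):
    """Walk the table back toward 1, then emit instructions in execution order."""
    ops = []
    while c != 1:
        _, method, k = memo[c]
        if method == SHIFT:
            ops.append(("shl", k)); c >>= k
        elif method == ADD1:
            ops.append(("add", 0)); c -= 1
        else:
            ops.append(("sub", 0)); c += 1
    ops.reverse()
    return ops

def apply_ops(ops, x):
    """Interpret a sequence to check it really computes c*x."""
    t = x
    for op, k in ops:
        t = t << k if op == "shl" else (t + x if op == "add" else t - x)
    return t

def pack(entry):
    """One 16-bit table entry: cost in the high byte, method and shift below."""
    cost, method, shift = entry
    return (cost << 8) | (method << 6) | shift

memo = {}
plan(7, memo)
print(sequence(7, memo))   # → [('shl', 3), ('sub', 0)], i.e. 7*x = (x << 3) - x
```

The `pack` helper shows why 16 bits suffice: a handful of method codes, a shift that fits in six bits, and a small cost byte cover every entry.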

(*) There's a claim in Bernstein's paper that a positive-to-negative
first step is never the best choice -- there should always be a
positive-to-positive step that is at least as good. This is not
entirely true, but only because of the joys of 2's complement
arithmetic and fixed word-size overflow. With a non-misguided
approach, the exceptions to Bernstein's rule are rare and
unimportant. I encountered only one, found by tedious construction,
and it was not a small number.

David Chase
