From: "Joseph D. Darcy" <darcy@CS.Berkeley.EDU>
Date: 1 Dec 1998 02:46:13 -0500
References: 98-09-164 98-10-018 98-10-040 98-10-120 98-11-015 98-11-031 98-11-059 98-11-093
Bruce Dawson <email@example.com> writes:
> >I'll have to
> >partially retract my statement about nobody being happy with the x87 -
> >it doesn't implement double precision as badly as I had feared, since
> >the only unavoidable problem if you set the rounding to double is the
> >exponent range - which will rarely matter.
firstname.lastname@example.org (Paul Eggert) wrote:
> Stick to your guns! The basic problem with x86 and strict `double' is
> that, even in 64-bit mode, the x86 doesn't round denormalized numbers
> properly. It simply rounds the mantissa at 53 bits, resulting in a
> double-rounding error. The proper behavior is to round at fewer bits.
The rounding used on the x86 is explicitly allowed by the IEEE 754
standard (section 4.3). The intention of the x86 design is to reduce
the occurrence of floating point exceptions and thereby generate the
correct numerical answer more often.
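The double-rounding effect under discussion can be reproduced in software by modeling round-to-nearest-even on integer significands. The sketch below is mine, not from the thread; the value 2^64 + 2^11 + 1 is a hand-picked worked example whose result falls exactly on a 53-bit tie after rounding to 64 bits, so rounding once (to 53 bits) and rounding twice (to 64 bits, then 53) disagree:

```java
import java.math.BigInteger;

public class DoubleRounding {
    // Round a positive integer significand to p significant bits,
    // round-to-nearest-even (a simplified model of IEEE 754 rounding).
    static BigInteger roundTo(BigInteger v, int p) {
        int bits = v.bitLength();
        if (bits <= p) return v;            // already representable
        int drop = bits - p;                // low-order bits to discard
        BigInteger ulp  = BigInteger.ONE.shiftLeft(drop);
        BigInteger half = BigInteger.ONE.shiftLeft(drop - 1);
        BigInteger frac = v.mod(ulp);
        BigInteger trunc = v.subtract(frac);
        int cmp = frac.compareTo(half);
        // Round up when above the halfway point, or on a tie when the
        // retained significand is odd (bit `drop` of trunc set).
        if (cmp > 0 || (cmp == 0 && trunc.testBit(drop))) {
            trunc = trunc.add(ulp);
        }
        return trunc;
    }

    public static void main(String[] args) {
        // v = 2^64 + 2^11 + 1: needs 65 significant bits.
        BigInteger v = BigInteger.ONE.shiftLeft(64)
                .add(BigInteger.ONE.shiftLeft(11))
                .add(BigInteger.ONE);
        BigInteger once  = roundTo(v, 53);               // direct double rounding
        BigInteger twice = roundTo(roundTo(v, 64), 53);  // extended, then double
        System.out.println(once.equals(twice));          // prints false
    }
}
```

Rounding directly to 53 bits yields 2^64 + 2^12, but rounding first to 64 bits produces 2^64 + 2^11 (a tie broken to even), which then ties again at 53 bits and rounds down to 2^64: a one-ulp discrepancy from rounding twice.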
> I've seen claims of efficient workarounds, but whenever I see details,
> it's clear that the methods are either incorrect or inefficient.
Roger Golliver of Intel has developed a refinement of the store-reload
technique that is both correct and efficient, with cost comparable to
the plain store-reload idiom (which exhibits double rounding). Using a
floating point exception handling optimization, Golliver's technique
implements correct pure double rounding with a speed penalty of 2X to
4X. For details, see the Java Grande documents
details, see the Java Grande documents
"Improving Java for Numerical Computation"
"Making Java Work for High-End Computing"
(The latter has a few formatting errors absent from the former.)
> Most people don't care about the errors,
Such discrepancies occur very rarely in practice and are quite
unlikely to break a practical program.
> though, which is why the Java spec is being relaxed to allow
> x86-like behavior (and PowerPC multiply-add, too). For the vast
> majority of floating point applications, performance is more
> important than bit-for-bit compatibility, so it's easy to see why
> bit-for-bit compatibility is falling by the wayside.
The new JVM spec uses a bit in a method's descriptor to indicate which
of two floating point semantics the method uses:
1. strict Java 1.0 floating point, for bit-for-bit reproducibility
2. relaxed floating point, where in some contexts float and double
values are allowed to have extended exponent range, to improve
performance
Existing class files will have the latter semantics.
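At the source level this per-method choice surfaced in the Java language as the `strictfp` modifier. A minimal sketch (the class and method names are mine, for illustration; the behavioral difference only shows up on implementations that actually use extended-exponent intermediates, such as the x87):

```java
public class FpModes {
    // strictfp: every intermediate float/double result must be rounded to
    // the exact IEEE 754 single/double format -- bit-for-bit reproducible
    // across JVMs (the original Java 1.0 semantics).
    strictfp static double strictSum(double a, double b) {
        return a + b;
    }

    // Without strictfp, the implementation may keep intermediates with an
    // extended exponent range (e.g. x87 80-bit registers), avoiding some
    // spurious intermediate overflows/underflows at the cost of
    // reproducibility.
    static double defaultSum(double a, double b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(strictSum(1.5, 2.25));
        System.out.println(defaultSum(1.5, 2.25));
    }
}
```

For a single addition of exactly representable values the two methods agree; the semantics diverge only where an intermediate result overflows or underflows the double exponent range.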