From: email@example.com (Przemek Klosowski)
Organization: U. of Maryland/NIST
Date: Thu, 31 Mar 1994 20:21:13 GMT
chase@Think.COM (David Chase) writes:

> In general, I think this trend is nuts (yes, I'm aware that reordering can
> occur in Fortran as long as it does not disobey parentheses). I can
> tolerate accuracy-enhancing optimizations (such as use of fused-madd, or
> replacing div-imprecise with mult-precise-reciprocal) under the control of
> a flag, but if you are trying to ensure that your application will exhibit
> no bugs in the field, then you do not want to monkey with its behavior in
> any way, even if you are making it "better".
>
> In addition, verification of a compiler is made more difficult by these
> sorts of things. No longer is there a single right answer -- now there is
> a range of correct answers. Testing (to the same degree of confidence)
> becomes much more expensive. This is especially true if you put other
> behavior-"improving" optimizations under the control of the "-O11" flag.
On the other hand, if the result CAN depend on reordering, then perhaps it
is not well-defined in a numerical sense. What is lacking is not some
special ordering giving a 'blessed' result, but rather an error bound on
the result.
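To make the reordering point concrete, here is a minimal illustration (my own example, not from the original thread): IEEE 754 addition is not associative, so two legal evaluation orders of the same sum give different answers, with neither one obviously "blessed".

```python
# Floating point addition is not associative: reordering a sum can
# change the result.  Values chosen to force cancellation.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0: a and b cancel exactly, then c survives
print(a + (b + c))  # 0.0: c is absorbed into b before the cancellation
```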
Interval arithmetic is one of the methods for obtaining such an error
estimate; I understand that it is currently impractical for production
numerical work, because of the speed penalty and because the intervals grow
pretty fast. At the same time, I propose that for the purposes of compiler
certification one could consider a program written using an interval
arithmetic algorithm and compare the intervals rather than the raw numbers.
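As a sketch of what such a certification program might compute, here is a hypothetical minimal interval arithmetic in Python. The names (`widen`, `iadd`, `imul`) are illustrative, and outward rounding is approximated by stepping each bound out one ulp with `math.nextafter` (Python 3.9+) rather than by switching the hardware rounding mode, which is the usual production technique.

```python
import math

# Hypothetical sketch: an interval is a (lo, hi) pair of floats.
# After each correctly rounded IEEE operation, stepping both bounds
# outward by one ulp guarantees the interval contains the exact result.

def widen(lo, hi):
    # Step the bounds outward by one ulp to cover rounding error.
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def iadd(x, y):
    return widen(x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # The extremes of a product lie among the four endpoint products.
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return widen(min(p), max(p))

# A certifier would check that results from differently ordered or
# differently optimized runs fall inside the computed interval.
x = (0.1, 0.1)
print(iadd(iadd(x, x), x))  # encloses 0.1 + 0.1 + 0.1
```

Note the design point this illustrates: the intervals really do grow (each operation widens the bounds), which is the speed-and-growth objection raised above.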
Actually, since I am responding to firstname.lastname@example.org, I might ask you
whether massively parallel processors couldn't somehow make interval
arithmetic more palatable? After all, the interval computation should
require at most double the number of processors.
I had this idea after hearing a talk by Jack Dongarra, who complained that
most parallel algorithms no longer have the nice predictable error bounds
provided by single-threaded algorithms, so some effort to make error
estimates will be needed anyway.
przemek klosowski (email@example.com)
Reactor Division (bldg. 235), E111
National Institute of Standards and Technology
Gaithersburg, MD 20899, USA
(301) 975 6249