Re: Why is using single-precision slower than using double-precision

tgl@netcom.com (Tom Lane)
Wed, 30 Nov 1994 06:24:18 GMT

          From comp.compilers

Related articles
[9 earlier articles]
Re: Why is using single-precision slower than using double-precision dsmentek@hpfcla.fc.hp.com (1994-11-23)
Re: Why is using single-precision slower than using double-precision trobey@taos.arc.unm.edu (1994-11-23)
Re: Why is using single-precision slower than using double-precision kenneta@hubcap.clemson.edu (1994-11-23)
Re: Why is using single-precision slower than using double-precision dik@cwi.nl (1994-11-24)
Re: Why is using single-precision slower than using double-precision davidc@panix.com (David B. Chorlian) (1994-11-24)
Re: Why is using single-precision slower than using double-precision roedy@BIX.com (1994-11-30)
Re: Why is using single-precision slower than using double-precision tgl@netcom.com (1994-11-30)
Re: Why is using single-precision slower than using double-precision hebert@prism.uvsq.fr (1994-11-24)
Re: Why is using single-precision slower than using double-precision dekker@dutiag.twi.tudelft.nl (Rene Dekker) (1994-11-30)
Re: Why is using single-precision slower than using double-precision meissner@osf.org (1994-11-24)

Newsgroups: comp.parallel,comp.arch,comp.compilers
From: tgl@netcom.com (Tom Lane)
Keywords: C, arithmetic
Organization: Netcom Online Communications Services
References: <3aqv5k$e27@monalisa.usc.edu>
Date: Wed, 30 Nov 1994 06:24:18 GMT

I can't believe that none of the dozen previous responders has gotten this
right...


1. In K&R C, all floating-point arithmetic is mandated to occur in double
precision. When you use single-precision float variables, the compiler must
convert the float operands to double and convert the results back to float.
Thus the float case is almost certain to be slower.
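
For example (just a sketch; the variable names are made up, and the exact
code emitted of course depends on your compiler), a K&R compiler has to
treat a pure-float expression like this:

    float a, b, c;

    c = a * b;   /* K&R C: compiled as c = (float)((double)a * (double)b),
                    i.e. two widening conversions, a double multiply,
                    and one narrowing conversion */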


2. The ANSI C standard changed this aspect of the language. Operations
between two floats are now permitted (or perhaps mandated? I'm not sure)
to be done in single precision. In that case, any performance difference
between the two precisions comes from the underlying hardware.


As several previous posters pointed out, the difference between single- and
double-precision arithmetic speed varies wildly across hardware platforms.
But before worrying about that, you need to find out which generation of C
your compiler is implementing.
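
One quick way to check is the __STDC__ macro, which ANSI-conforming
compilers predefine (a sketch; it only tells you the compiler claims ANSI
conformance, not how it actually chooses to evaluate float expressions):

    #ifdef __STDC__
        /* ANSI compiler: float op float may be evaluated in single precision */
    #else
        /* K&R compiler: float arithmetic is done in double precision */
    #endif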


regards, tom lane


PS: and don't forget that floating-point constants are double unless you
add an 'F' suffix to make them single. An operation involving a single and
a double is always done in double precision...
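
For instance (hypothetical variable names):

    float x, y;

    y = x * 2.0;    /* 2.0 is a double, so x is widened and the multiply
                       is done in double precision, even under ANSI C */
    y = x * 2.0F;   /* 2.0F is a float, so an ANSI compiler may do a
                       single-precision multiply */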
--

