From: firstname.lastname@example.org (Torben Ægidius Mogensen)
Date: Wed, 04 Jul 2007 11:30:40 +0200
Organization: Department of Computer Science, University of Copenhagen
Keywords: arithmetic, design, comment
Posted-Date: 04 Jul 2007 20:37:10 EDT
Denis Washington <email@example.com> writes:
> I'm currently developing a little C-like programming language as a
> hobby project. After implementing the basic integer types known
> from Java/C# (with fixed sizes for each type), I thought a bit
> about 64-bit machines and wanted to ask: if you develop on a 64-bit
> machine, would it be preferable to leave the standard integer type
> ("int") at 32 bits, or would it be better to have "int" grow to 64
> bits? In the latter case, I could have an architecture-dependent
> "int" type along with fixed-size types like "int8", "int16",
> "int32", etc.
> What do you think?
> [I would make my int type the natural word size of the machine. If people
> want a particular size, they can certainly say so. -John]
I never really liked C's machine-dependent integer type. I prefer
integer types to have explicit fixed sizes (and a selection of those)
or be unbounded. However, I'm happy to allow the implementation to
use more bits than required, so an int16 could be implemented as a
32-bit integer on machines where operating on 16-bit entities is
difficult or costly.
Even better than a small fixed number of sizes (such as int8, int16,
int32 and int64) is to (as in Pascal) explicitly state the required
minimum and maximum values, giving types like -10..10 or 0..255.
You would be guaranteed that all values in the interval are
representable. Ideally (as in Pascal), you would get an error if
you put a value into a variable that its type does not support, but
if you are worried about performance, it would be acceptable to
drop these tests. Many of them could be eliminated at compile time
anyway, as index checks often are.
In addition to explicitly bounded numbers, you could have an integer
type that is bounded only by the memory available to store it. If
you instead just add a machine-dependent bounded integer type (as
Pascal does), people will tend to use it rather than the explicitly
bounded types and make tacit assumptions about the range of values.
[PL/I let you specify how big all your integers needed to be, and I
can't say that part was a rousing success. -John]