Re: speeding up compile times in C++

shankar@sgi.com (Shankar Unni)
Wed, 8 Feb 1995 22:48:12 GMT

From comp.compilers


Newsgroups: comp.compilers
From: shankar@sgi.com (Shankar Unni)
Keywords: C++, performance
Organization: Silicon Graphics, Inc.
References: 95-02-012 95-02-050
Date: Wed, 8 Feb 1995 22:48:12 GMT

Steve Glass (glasss@ncp.gpt.co.uk) wrote:


> Well, our project is nearly 1 million lines of C++. We have thousands of
> classes but only use `standard' C++ techniques (not including headers
> whenever possible, using forward declarations etc.) and do not use
> `clever' declaration techniques such as opaque types.


Ah, this starts to defeat the precompiled-header approach. Having worked on
an implementation, I found that one can go about this in one of two ways,
each with its own drawbacks:


  - one precompiled header per source header. This causes enormous
      headaches, especially with inter-header dependencies (macros are a
      *BEAR*, as are redeclarations of tags, since most parsers simply swallow
      them silently leaving no trace behind).


  - one precompiled header for a "common leading set" of header files in a
      compilation unit. As long as two compilation units share the same
      leading set of header files, and are compiled with the same options,
      they can share a precompiled header file.


      Since we are rarely so disciplined about keeping a common leading set,
      this leads to a lot of disk space being consumed by precompiled headers.


Doing the sorts of "standard C++ techniques" described by Steve above
shoots the second approach right out of the water, and also causes problems
for the first approach.


> Re: the speed of Borland C++'s precompiled headers..


While I'm not very familiar with that implementation, I must say that to
get *any* substantial benefit from precompiled headers, you have to be
willing to cooperate with the compiler to get that performance.


Paradoxically, if you're using precompiled headers, it really helps to
have a single "#include "common.h"" which pulls in just about every header
you're interested in.
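For illustration, such an umbrella header might look something like this
(all file and header names here are invented, not from any real project):

```cpp
/* common.h -- hypothetical umbrella header.  Every compilation unit
 * includes this, and only this, before anything else, so all units
 * share the same leading set of headers and the compiler can share
 * one precompiled header among them. */
#ifndef COMMON_H
#define COMMON_H

#include <stdio.h>
#include <stdlib.h>

#include "project_types.h"    /* invented project-wide typedefs */
#include "widget.h"           /* invented class declarations */
#include "gadget.h"

#endif /* COMMON_H */
```

Each source file then begins with #include "common.h" and nothing above
it; a unit that includes anything extra before it drops out of the shared
leading set and forces its own precompiled header.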


On Unix systems, at least, good implementations simply mmap() in a complete
symbol table for precompiled headers; the speedup is absolutely tremendous,
and having a common precompiled header (and using common options) can give
you relatively huge gains in compile-time performance (like slash it by
half or more, even for a full recompile).


> Rather than separately compile individual files and then link into a
> library or program we used the preprocessor. We #include all the sources
> for a library into one source file and compiled that as a whole.
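For readers who haven't seen this trick, the "one big file" Steve
describes might look like the following (source names invented):

```cpp
/* mylib_all.cc -- hypothetical "compile the whole library as one
 * translation unit" file; all file names are invented.  The headers
 * shared by these sources are parsed once instead of once per file. */
#include "widget.cc"
#include "gadget.cc"
#include "frobnicate.cc"
```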


And now we are segueing into the idea of a Smalltalk-ish environment,
where all the "sources" (with no real concept of "headers" or "files")
are stored in a common environment, which is then used to generate code.


This approach has benefits and drawbacks, too, but I believe this is the
ultimate direction we must head towards.


The current drawbacks are that


  - It is difficult for a team to work on a setup like this together (though
      not impossible).


  - Revision control is a problem. I'm not sure what work has been done in
      the area of revision control in such an environment (given the complete
      absence of the concept of source files). (Perhaps a per-declaration
      source control, with a notion of a "group" of changes which can be
      committed and/or backed out together?).


  - Environments using databases to do this tend to be slow and bloated
      today, even with the fastest databases around.


(Smalltalk used to avoid all these problems by simply ignoring them. Is
there any work on any of this in Smalltalk implementations?)


--
Shankar Unni E-Mail: shankar@sgi.com
Silicon Graphics Inc. Phone: +1-415-390-2072
--

