Re: Best Ref-counting algorithms?



From: George Neuner <gneuner2@comcast.net>
Newsgroups: comp.compilers
Date: Thu, 30 Jul 2009 00:28:38 -0400
Organization: A noiseless patient Spider
References: 09-07-018 09-07-039 09-07-043 09-07-054 09-07-060 09-07-066 09-07-071 09-07-073 09-07-082 09-07-090 09-07-101
Keywords: GC
Posted-Date: 30 Jul 2009 23:13:52 EDT

On Mon, 27 Jul 2009 11:18:56 -0700 (PDT), Christoffer Lernö
<lerno@dragonascendant.com> wrote:


>On Jul 25, 8:46 am, George Neuner <gneun...@comcast.net> wrote:
>> On Wed, 22 Jul 2009 02:31:06 -0700 (PDT), Christoffer Lernö
>> <le...@dragonascendant.com> wrote:
>
>From a performance perspective (I mentioned my wish for predictive GC
>performance before), would it make sense to enable programmer
>controlled regions rather than pure compiler directed ones?


Programmer controlled regions = manual memory management. There is
nothing wrong with that (and Mark-Release regions are an extremely
efficient way to do it), but it cannot be construed as "automatic" or
"GC" any more than deleting a member array in a C++ object's
destructor can.




>> Escape analysis is purely an issue of lexical scoping and whether the
>> object may outlive the scope in which it is defined. What form of
>> typing the language uses is not relevant ... you are only looking for
>> the object to leave the control of the scope chain.
>
>In the case of an OO language with dynamic dispatch, then if I'm not
>mistaken it's not really possible to tell what happens to any of the
>arguments of a method invocation (including the object itself) at
>compile time.
>
>It's only post-fact (when returning from the current scope) one is in
>a position to determine if the object will outlive the current scope
>or not.


Even though the control path is not determined until run time, it is
still statically specified (even in Lisp, where you can add and remove
methods at run time). Although you don't know which control path will
be taken, you still know that (depending on what the language allows)
only actual OO objects, local structures, or pointers to them can
possibly escape. The analysis considers the set of potential escapees
passed at the statically known call site. The actual types involved
don't matter; the analysis is concerned only with names.
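As a made-up illustration of what a per-function analysis sees at a
virtual call site:

struct Node { int value; };

struct Sink {
    virtual void put(Node* n) = 0;   // dynamic dispatch: callee unknown here
    virtual ~Sink() = default;
};

Node* g_kept = nullptr;

struct Keeps : Sink { void put(Node* n) override { g_kept = n; } };  // n escapes
struct Drops : Sink { void put(Node* n) override { (void)n; } };     // n does not

void demo(Sink& s) {
    Node local{42};
    // The compiler does not know which override will run, but it does
    // know that the *name* `local` is handed to an unknown callee at
    // this call site.  A per-function escape analysis therefore marks
    // `local` as potentially escaping; its dynamic type is irrelevant.
    s.put(&local);
}

In a GC'd language the compiler would respond by heap-allocating
`local`; in C++ as written, Keeps::put would of course leave a
dangling pointer behind. Whole-program analysis (below) could
sometimes prove that only Drops-like callees are ever reachable.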


If you can do whole-program or compilation-unit analysis, you may find
through call-chain analysis that escape is impossible even though the
local escape analysis says it is possible. Some functional languages do
this (under high optimization settings) to minimize unnecessary heap
allocation.


Some functional languages do such analysis (under high optimization
settings) to prevent needless heap allocation.




>That's why I suggested allocating everything on the stack, as I
>suppose that most allocation could safely be done on the stack
>(assuming constructor allocation being inlined to the parent scope).


The problem with stack allocating first and then copying is that the
copy must be made *before* the pointer potentially escapes. This is
because once the object leaves the scope chain, anything might happen
to it - including hidden access by an asynchronous mutator.


You also have to consider the nature of the objects: whether a
shallow copy is sufficient or a deep copy is necessary, and whether
you can actually perform the needed operations automatically.


Beyond that, once the heap copy exists the stack copy logically dies,
which means that the local code must only reference the heap copy from
that point on.
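Roughly, the generated code would have to do something like this
(purely illustrative; interaction with the collector and allocation
failure handling are omitted):

#include <cstdlib>
#include <cstring>

struct Point { double x, y; };

Point* g_escaped = nullptr;       // stand-in for "leaves the scope chain"

void maybe_escape(bool escapes) {
    Point p{1.0, 2.0};            // optimistic stack allocation
    Point* ref = &p;              // every local use goes through `ref`

    if (escapes) {
        // Promote *before* the pointer can be seen elsewhere.  A shallow
        // copy suffices for Point; a deep copy would be needed if it held
        // pointers to other stack-allocated data.
        Point* heap = static_cast<Point*>(std::malloc(sizeof(Point)));
        std::memcpy(heap, &p, sizeof(Point));
        ref = heap;               // the stack copy is dead from here on
        g_escaped = ref;          // only the heap copy is published
    }

    ref->x += 1.0;                // local code never touches &p again
}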


The escape analysis tells you indirectly which objects are safe to
stack allocate by telling you which objects must be heap allocated to
be safe. Escape analysis is conservative in that it doesn't know
whether the object does escape ... it can only tell that it might.




>Am I missing something that could enable me to do proper escape
>analysis and put allocations on the stack?


I think at this point you understand the actual analysis and the
potential consequences adequately. What you need to consider now is
how using it impacts your design.


George


