|Bison, memory leaking firstname.lastname@example.org (Dennis Björklund) (1997-05-25)|
|Re: Bison, memory leaking email@example.com (Charles Fiterman) (1997-05-27)|
|Re: Bison, memory leaking firstname.lastname@example.org (Paul David Fox) (1997-05-27)|
|Re: Bison, memory leaking email@example.com (1997-06-02)|
|Re: Bison, memory leaking firstname.lastname@example.org (1997-06-04)|
From: email@example.com (Dennis Bjorklund)
Date: 2 Jun 1997 10:29:37 -0400
>> There is a problem with bison. When bison tries to fix a parse error
>> then it discards semantic values on the valuestack and it discards
>> tokens that it gets from yylex(), until it reaches a state that it can
>> reduce in.
>> [This is a well-known yacc problem. I usually solve it by chaining the
>> malloc'ed data together and then releasing it all either at the end of
>> the parse or in the top-level statement-list type rule. -John]
Paul David Fox <firstname.lastname@example.org> writes:
>I can second that (John's reply). For a large parse-tree, the overhead
>of malloc() can be extremely high. Not only that, but you can
>halve your performance by trying to free the memory afterwards.
>You can gain oodles of speedups by having your own fixed-size mallocator
>and freeing things in huge chunks. Not only that - this effectively
>gives you a form of garbage collection for free.
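A fixed-size "mallocator" of the kind Paul describes might look roughly like this (a simplified sketch with invented names; it assumes every object fits in one slab): nodes are carved out of large slabs, and destruction drops whole slabs at once, so per-node free() never happens.

```cpp
#include <cstdlib>
#include <cstddef>

// One large chunk from which many small objects are carved.
struct Slab {
    Slab *next;
    char data[4096];
};

class Pool {
    Slab *slabs = nullptr;
    std::size_t used = sizeof(Slab::data);  // forces first slab allocation
public:
    // Bump-allocate from the current slab; start a new slab when full.
    void *alloc(std::size_t size) {
        // Round up so every object is suitably aligned.
        const std::size_t a = alignof(std::max_align_t);
        size = (size + a - 1) & ~(a - 1);
        if (used + size > sizeof(Slab::data)) {
            Slab *s = static_cast<Slab*>(std::malloc(sizeof(Slab)));
            s->next = slabs;
            slabs = s;
            used = 0;
        }
        void *p = slabs->data + used;
        used += size;
        return p;
    }
    // "Garbage collection for free": releasing the pool frees everything
    // in huge chunks, one free() per slab instead of one per object.
    ~Pool() {
        while (slabs) {
            Slab *n = slabs->next;
            std::free(slabs);
            slabs = n;
        }
    }
};
```

Note this sketch only suits trivially-destructible payloads; objects with real C++ destructors need extra bookkeeping, which is exactly the point Dennis makes below.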
But speed is not the problem here; you can still use your own malloc
if you want. I think my solution is very neat, and the two approaches
are complementary rather than competing: my change only kicks in when
bison is in error recovery mode, which is the only time this memory
leak can occur. And since I have C++ objects, the destructors are
called correctly and easily. If we could get this small change into
bison, then all you would have to do is add a function:
void yydiscard( int symbol, YYLVAL *lval )
{
    switch( symbol )
    {
    case Tidentifier :              // T = Terminal
        delete lval->string;
        break;
    case NTlist :                   // NT = NonTerminal
        delete_list( lval->list );
        break;
    }
}
Before I changed bison I had a linked list of everything that was
created, and with each object a pointer to the right destructor to
call. That gives lots of overhead we don't need: a list of objects,
trouble with destructors, and extra work when freeing memory (walking
the linked list). I gained speed by doing it the new way.
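For contrast, the earlier workaround Dennis describes might be sketched like this (names are invented for illustration): every parse-time object is registered together with a thunk that knows how to destroy it, and the list is walked at cleanup time.

```cpp
#include <string>

// One entry per tracked object: the object plus its destructor thunk.
struct Tracked {
    Tracked *next;
    void *obj;
    void (*destroy)(void*);   // destructor to call for this object
};

static Tracked *tracked = nullptr;

// Register an object and the function that destroys it.
void track(void *obj, void (*destroy)(void*)) {
    tracked = new Tracked{tracked, obj, destroy};
}

// Walk the list, destroying every registered object.
void destroy_all() {
    while (tracked) {
        Tracked *n = tracked->next;
        tracked->destroy(tracked->obj);
        delete tracked;
        tracked = n;
    }
}

// Example thunk for std::string (one such thunk per tracked type).
static void destroy_string(void *p) { delete static_cast<std::string*>(p); }
```

The extra list node and indirect call per object are exactly the overhead the yydiscard hook avoids, since with the hook the grammar symbol itself tells you which destructor to run.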