Re: predicate parsing (Dave Schaumann)
Thu, 22 Apr 1993 17:52:36 GMT


Newsgroups: comp.compilers
From: (Dave Schaumann)
Keywords: parse, tools
Organization: University of Arizona
References: 93-04-044 93-04-077
Date: Thu, 22 Apr 1993 17:52:36 GMT

tamches@wam (Ariel Meir Tamches) writes:
>It's hard to imagine how one can insert arbitrary predicates in a
>bottom-up parser;

LR parsing is based on running a DFA that recognizes the handles of the
grammar, issuing the appropriate shift, reduce, or accept action at each
step.
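As a toy illustration (my own sketch, not any real generator's output): a
shift-reduce recognizer for the grammar S : '(' S ')' | 'a', which matches
handles by pattern on the stack where a real LR parser would encode the
same decisions in its DFA tables:

```python
def parse(tokens):
    """Toy shift-reduce recognizer for  S : '(' S ')' | 'a'."""
    stack = []
    tokens = tokens + ["$"]   # end-of-input marker
    i = 0
    while True:
        if stack[-1:] == ["a"]:
            stack[-1] = "S"                   # reduce  S : 'a'
        elif stack[-3:] == ["(", "S", ")"]:
            stack[-3:] = ["S"]                # reduce  S : '(' S ')'
        elif stack == ["S"] and tokens[i] == "$":
            return True                       # accept
        elif tokens[i] != "$":
            stack.append(tokens[i])           # shift
            i += 1
        else:
            return False                      # error: no action applies

# parse(list("((a))")) -> True; parse(list("(a")) -> False
```

A real LR parser never scans the stack like this; the parse table already
tells it, from the current state and lookahead, which action to take.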

It seems to me that we could augment this model by adding test-reduce
actions to the PDA. For instance, when you generate the parse tables for
something like this:

var_name : identifier
type_name : identifier

a reduce/reduce conflict arises -- when the parser sees an identifier, it
doesn't know whether to reduce by the type_name rule or the var_name rule.
This is where the test-reduce action would come in. The easiest way to
implement it would be to associate a predicate with each conflicting
production:

var_name  : is_var_name(identifier)
type_name : is_type_name(identifier)

Then, when the test-reduce action is encountered, the parser would evaluate
the predicate of each candidate in turn until one succeeded, and then
reduce by the associated rule. Notice that this is needed only for the
conflicting rules; all other rules can be handled by the usual reduce
action.

Of course, faster and more elegant schemes are possible, but I think this
demonstrates that predicates can be practically implemented in an LR
parser.

>[...]top-down is the way to go. People used to (and maybe still do) laugh
>at LL parsers; after all, LR(k) is a proper superset of LL(k). But when
>we consider how easy it is to add predicates and inherited attributes to
>LL, we see that "conventional wisdom" has been wrong.

Of course, LL is fine if you can do without left recursion and can always
decide which rule to choose next from FIRST sets. In some cases this is no
problem; in others it forces you to mutilate your grammar to satisfy the
needs of the parser. Certainly you can apply the well-known algorithms for
left-factoring and left-recursion removal, but every factoring and every
recursion you remove leaves you with a grammar one more step removed from
the one you originally wrote.
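As a concrete illustration (the standard textbook transformation, not
something from the original post): the left-recursive rule
list : list ',' item | item becomes list : item list_tail with
list_tail : ',' item list_tail | (empty), and a recursive-descent parser
then consumes the tail with a loop:

```python
# Left-recursion removal:
#   list      : list ',' item | item        (cannot be parsed top-down)
# becomes
#   list      : item list_tail
#   list_tail : ',' item list_tail | (empty)

def parse_list(tokens):
    """Parse a comma-separated list of items; return the items in order."""
    pos = 0
    items = [tokens[pos]]      # list : item ...
    pos += 1
    while pos < len(tokens) and tokens[pos] == ",":   # list_tail
        items.append(tokens[pos + 1])
        pos += 2
    return items

# parse_list(["a", ",", "b", ",", "c"]) -> ["a", "b", "c"]
```

Note that the loop mirrors the transformed, right-flavored grammar rather
than the left-growing shape of the original -- which is exactly the "step
removed" cost described above.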

>From another angle, consider hacks used by Yacc to get complex
>(non-LALR(1)) grammars to parse: I'm still trying to decipher the tricks
>and hacks used in GNU C++ and Jim Roskind's Yacc grammars.

I think that the problems of the various C++ grammars belong to C++ far
more than they belong to Yacc or LR parsing.
Dave Schaumann
