Subject: Parsers are easy to write (was: Re: Backtracking yacc)
From: firstname.lastname@example.org (Andrew Dunstan)
Organization: The University of Adelaide
Date: Fri, 25 Sep 1992 07:52:18 GMT
Keywords: parse, LL(1), errors
> LALR(1) is powerful enough for most purposes. (In fact I
> am more taken with LL(1), for reasons I won't go into here.)
email@example.com (Gary Merrill) wrote:
> Wait a minute! Please *do* go into these reasons here. I think they
> might very well be of general interest. I too am more "taken" with LL(1)
> grammars. They seem more "natural" to me. I think I could make this
> more precise, but I am quite interested in hearing the feelings of others.
OK, twisted my arm, you smooth-talking bastard. -:) Several correspondents
have suggested I address this. One of the things I am interested in is
error recovery. Left parsing has a big advantage here, it seems to me,
since left parsers hold far more information about where they are going. The
stack of a deterministic left parser contains the single viable suffix of the
sentential form begun by what has been parsed so far, while a right parser
knows at best a set of possible continuations. The error recovery scheme I
am working on at the moment works by changing the parser's prediction
stack in certain circumstances. Of course, this means it must be a left
parser, since right parsers are not predictive, and it must be
table-driven instead of recursive, since I was not prepared even to
contemplate manipulating the run-time stack.
I noted with interest that one poster said that error recovery in a
back-tracking parser was horrific, and that it was hard in ANY right
parser. I agree with him (I think it was a him :-) )
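To make the "prediction stack" concrete, here is a minimal table-driven
LL(1) parser in Python for a tiny expression grammar. This is a sketch of
the general technique only, not the recovery scheme described above; the
grammar, table, and names are all my own illustration. Note that at an
error, the stack top tells us exactly which symbol was predicted, which is
precisely the information a recovery scheme could exploit by patching the
stack.

```python
# Table-driven LL(1) parser with an explicit prediction stack (a sketch;
# grammar and names are illustrative, not from the original article).
# Grammar:  E -> T E'    E' -> + T E' | eps    T -> id | ( E )

END = "$"

# (nonterminal, lookahead) -> right-hand side to predict
TABLE = {
    ("E",  "id"): ["T", "E'"],
    ("E",  "("):  ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"],
    ("E'", ")"):  [],            # E' -> eps
    ("E'", END):  [],            # E' -> eps
    ("T",  "id"): ["id"],
    ("T",  "("):  ["(", "E", ")"],
}

NONTERMINALS = {"E", "E'", "T"}

def parse(tokens):
    tokens = tokens + [END]
    stack = [END, "E"]           # prediction stack: the viable suffix
    i = 0                        # of the sentential form still expected
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top in NONTERMINALS:
            prod = TABLE.get((top, look))
            if prod is None:
                # The parser knows exactly what it predicted here; a
                # recovery scheme could patch `stack` at this point.
                raise SyntaxError(f"expected {top}, saw {look!r}")
            stack.extend(reversed(prod))
        elif top == look:
            i += 1               # terminal matched; advance the input
        else:
            raise SyntaxError(f"expected {top!r}, saw {look!r}")
    return i == len(tokens)
```

For example, `parse(["id", "+", "id"])` accepts, while `parse(["id", "+",
"+"])` fails at the point where `T` was predicted but `+` was seen.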
There are other reasons - LL parsers are easy to hand craft, making them
good for teaching purposes (yes, I know we all use calculators, but we
also have to know how to do arithmetic without them). Semantic actions can
be embedded in the grammar in a way that seems to me more natural than is
the case in right parsers, and semantic records don't need to be carried
along in the parser's stack. LL parsers of good quality are also very fast.
I don't agree with Gary that writing LL grammars is more "natural" than
writing LR grammars. But I certainly find making them do semantic actions
is more natural, and that seems to me to be more important.
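As an illustration of both points (easy to hand craft, semantic actions
embedded where they apply), here is a small hand-written recursive-descent
evaluator for the same sort of expression grammar. A sketch under my own
assumptions: single-token numbers, `+` and parentheses only; all names are
mine.

```python
# Hand-crafted recursive-descent (LL(1)) evaluator -- an illustrative
# sketch.  The semantic action (addition) sits directly at the point in
# the grammar rule where it applies.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens + ["$"]
        self.i = 0

    def look(self):
        return self.tokens[self.i]

    def eat(self, tok):
        if self.look() != tok:
            raise SyntaxError(f"expected {tok!r}, saw {self.look()!r}")
        self.i += 1

    # E -> T E' ;  E' -> + T E' | eps  (the tail written as a loop)
    def expr(self):
        value = self.term()
        while self.look() == "+":
            self.eat("+")
            value += self.term()   # semantic action embedded in the rule
        return value

    # T -> num | ( E )
    def term(self):
        if self.look() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        value = int(self.look())
        self.i += 1
        return value

def evaluate(tokens):
    p = Parser(tokens)
    v = p.expr()
    p.eat("$")                     # insist all input was consumed
    return v
```

Compare this with a yacc-style right parser, where the same action would be
attached to a reduction and its operands fetched from positions in the
semantic stack.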
The silly thing about parser wars is that they concentrate on the wrong
thing. Parsing correct input is easy. It's also elegant and fun. But the
places where we need most work done are the hard parts - things like error
recovery and good code generation. Parsers should be chosen according to
their ability to help us do these hard things, not the easy bits.
firstname.lastname@example.org (Joachim Schrod) writes:
> Has anybody of the persons involved in this discussion actually tried
> PCCTS (the Purdue Compiler Construction ToolSet)? It produces parsers for
> LL(k) grammars.
I've got it, but haven't had time to look at it except briefly. It's
high on my list of priorities.
# Andrew Dunstan
# Department of Computer Science
# University of Adelaide
# South Australia
# net: email@example.com