LR(0) vs. LALR, and the Great Parsing War (Jonathan Eifrig)
Sun, 30 Aug 1992 02:40:50 GMT


Newsgroups: comp.compilers
From: (Jonathan Eifrig)
Organization: Johns Hopkins Computer Science Department, Baltimore, MD
Date: Sun, 30 Aug 1992 02:40:50 GMT
Keywords: parse, bibliography

It being near the start of the new school year and all, it seems
appropriate to start the (N+1)th Annual Comp.Compilers Parsing Argument.
Let's see if we can finish it up before Thanksgiving this year! :-)

Seriously, I came across a _hint_ of a surprising result that I
was hoping someone could either confirm or refute:

In a paper in this year's ACM SIGPLAN Workshop on ML and its
Applications, Pettersson and Fritzson [1] claim that using Tomita's
pseudo-parallel LR(0) parsing scheme [2] "in practice" only takes about
10% longer than using an SLR or LALR parser. As justification they cite a
result by Lankhorst [3] that I simply _cannot_ get my hands on. (Read the
reference; you'll understand my difficulty.)

This seems quite surprising to me, given that Tomita's algorithm
basically has to spin off new parsing automata on the fly to follow each
possible path in a derivation. Now, while I can see that this might be
true if you're implementing in a language with a nice GC-ed heap like
Lisp, it's not so clear if you have to keep cloning stacks all the time.
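To make the cloning concern concrete, here is a deliberately naive
pseudo-parallel shift-reduce recognizer for the ambiguous grammar
E -> E '+' E | 'a' (the grammar, function names, and representation are
mine, purely for illustration; this is not Tomita's actual algorithm).
Every live parse carries a whole copied stack, and ambiguity multiplies
the set of stacks; Tomita's graph-structured stack avoids exactly this
by sharing common stack suffixes among parses.

```python
# Naive pseudo-parallel recognizer for the ambiguous grammar
#     E -> E '+' E | 'a'
# Each live parse is a full tuple-valued stack; forking a parse means
# copying the whole stack, which is the overhead the post worries about.

def closure(stacks):
    """Apply every possible reduction to every stack, keeping the
    originals too (each kept alternative is a 'cloned' parse)."""
    seen = set(stacks)
    work = list(stacks)
    while work:
        st = work.pop()
        if st[-1:] == ('a',):               # reduce E -> 'a'
            red = st[:-1] + ('E',)
        elif st[-3:] == ('E', '+', 'E'):    # reduce E -> E '+' E
            red = st[:-3] + ('E',)
        else:
            continue
        if red not in seen:
            seen.add(red)
            work.append(red)
    return seen

def recognize(tokens):
    """Breadth-first simulation: shift each token onto every live
    stack, interleaved with reduction closures."""
    stacks = {()}
    for tok in tokens:
        stacks = {st + (tok,) for st in closure(stacks)}
    return any(st == ('E',) for st in closure(stacks))
```

For example, recognize(list("a+a+a")) succeeds while recognize(list("a+"))
fails; watching len(stacks) grow across the input shows the copying cost
that a shared, garbage-collected representation would amortize.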

Can someone who's read the paper post or mail me some sort of
summary of Lankhorst's results?


@string{mlwork92 = "Proceedings of the 1992 ACM SIGPLAN Workshop on ML and
Its Applications"}
@string{ijcai85 = "Proceedings of the 1985 International Joint Conference
on Artificial Intelligence"}

@inproceedings{pettersson92,
author = "Mikael Pettersson and Peter Fritzson",
title = "A General and Practical Approach to Concrete Syntax Objects
within ML",
booktitle = mlwork92,
year = 1992,
pages = "17--27",
abstract = "In this paper we present an approach to concrete syntax
objects within ML, which is both general and efficiently implementable.
These language enhancements add BNF rules for abstract syntax declarations
and ``{\em semantic brackets}'' with inline concrete syntax and pattern
matching for readability and conciseness. This approach has several
improvements integrated together which either do not appear in previous
works, or appear in forms which are very restrictive or have very
inefficient implementations. Our improvements are: (1) inline concrete
syntax within ``semantic brackets'' has been integrated both with the ML
type system and the ML scope rules, (2) concrete syntax can appear both as
{\em syntactic patterns} for pattern matching and as {\em syntactic
expressions} for building objects, (3) patterns can be nested to arbitrary
depth, (4) concrete syntax and ML objects can be mixed; so-called {\em
anti-quotations} are supported directly, (5) patterns and parts of
patterns can be augmented with type information, (6) efficient integration
with a general incremental LR(1) parsing mechanism.
These extensions have been efficiently implemented within our DML
system. DML, the Denotational Meta-Language, is a dialect of Standard ML
with extensions aimed at making it (even more) useful for implementing
denotational specifications of programming languages." }

@inproceedings{tomita85,
author = "Masaru Tomita",
title = "An Efficient Context-Free Parsing Algorithm for Natural Languages",
booktitle = ijcai85,
year = 1985
}

@inproceedings{lankhorst91,
author = "M. M. Lankhorst",
title = "An Empirical Comparison of Generalized LR Tables",
booktitle = "Proceedings of the Workshop on Tomita's Algorithm - Extensions
and Applications",
year = 1991
}

Jack Eifrig, The Johns Hopkins University, C.S. Dept.
