Re: How change grammar to equivalent LL(1) ?


From: Kaz Kylheku <>
Newsgroups: comp.compilers
Date: Fri, 24 Apr 2020 18:13:58 +0000 (UTC)
Organization: NNTP Server
References: 19-12-023 20-04-009
Keywords: parse
Posted-Date: 24 Apr 2020 14:42:15 EDT

On 2020-04-24, Lasse Hillerøe Petersen <> wrote:
> I know this is a very late reply; I sometimes forget to read Usenet
> news for a while. I hope the moderator is forgiving.
> On Mon, 23 Dec 2019 05:57:50 -0500, Christopher F Clark wrote:
>> Just a slight comment on what Lasse Hillerøe Petersen
>> <> wrote:
>> is called left-factoring.
> I am aware; however, the point was having the refactored action return a
> function to adjust the direction of the parse tree. I am sure LISPers and
> Schemers wouldn't consider this anything special (so ordinary perhaps
> even, that I hadn't been able to find any written mention of it, until
> today), but when I wrote it back in 2017 I looked at my code and thought
> "hey, that's actually neat and general."
> Only today did I manage to find a paper which, although I am very
> rusty in the matter of formal proofs and theory, being just an
> amateur hacker, to me reads like the theory behind "my" method:
> Thielecke, Hayo (2012). Functional semantics of parsing actions, and
> left recursion elimination as continuation passing. In PPDP '12:
> Proceedings of the 2012 ACM SIGPLAN Principles and Practice of
> Declarative Programming, 91-102. doi:10.1145/2370776.2370789.

Both left and right recursion elimination are related to tail-call
optimization and continuation passing.

In a shift-reduce LALR(1) parser, right recursion consumes parser stack
space in proportion to the depth of the recursion. If you can refactor
it into left recursion, it becomes stackless.
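For instance, a list rule can be written with either flavor of
recursion. A sketch in yacc notation (the two rules below are
alternatives, not one grammar; "item" stands for some token):

```
/* Right recursion: every item must be shifted before the first
   reduction can fire, so the parser stack grows with the length
   of the list. */
list : item list
     | item
     ;

/* Left recursion: a reduction fires after each item, so the stack
   stays at constant depth no matter how long the list is. */
list : list item
     | item
     ;
```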

There is also a strong connection to the "reduce" or "fold" function in
functional programming. We can identify the "reduce" action of an
LALR(1) parser with "reduce", the function.

"reduce" takes an accumulator, initialized to some value, and then
decimates a sequence by repeatedly passing the accumulator as
the left argument to a function, and the successive items of the
sequence as the right argument. For each successive call, the return
value of the previous call is used as the accumulator.

If we write a left-recursive calculator using a parser generator, say
with addition as the binary op:

      expr : expr '+' term { $$ = $1 + $3; }
           | term          { $$ = $1; }

(where term derives a number)
this behaves like an iterative reduce over an input like '1 + 2 + 3 ...'.

The accumulator is seeded with 1, and then threaded through the
successive reductions without consuming parser stack space.

Fold/reduce, grammar reductions, continuations and (tail-)recursion
are all closely related.

If a compiler's target run-time is continuation-based, then stackless
tail calls are trivial to implement. All functions return by invoking
their continuation already, and so to make a tail call to a function,
you just call it, and give it your *own* continuation as the
continuation argument. If that function invokes that continuationm, it
will "return" to wherever you would have returned.
To generate a regular non-tail call which will return back to the
caller, the caller captures a local continuation and gives the callee
that one.

The accumulator object in a reduce is a kind of continuation: it
summarizes everything that has been done so far, so the calculation can
continue without having to regress anywhere. Old values of the
accumulator are never revisited, so the reduce job can be done
iteratively: by assigning the new accumulator value over the old one.
That is easily achieved without assignment by tail recursion.

TXR Programming Language:
Music DIY Mailing List:
ADA MP-1 Mailing List:
