Re: Table compression

Chris F Clark <cfc@shell01.TheWorld.com>
3 Oct 2005 00:23:01 -0400


Related articles
Table compression leonardo@dcc.ufmg.br (Leonardo Teixeira Passos) (2005-09-27)
Re: Table compression anton@mips.complang.tuwien.ac.at (2005-09-30)
Re: Table compression hannah@schlund.de (2005-09-30)
Re: Table compression cleos@nb.sympatico.ca (Cleo Saulnier) (2005-09-30)
Re: Table compression Peter_Flass@Yahoo.com (Peter Flass) (2005-10-02)
Re: Table compression paul@parsetec.com (Paul Mann) (2005-10-02)
Re: Table compression cfc@shell01.TheWorld.com (Chris F Clark) (2005-10-03)
Re: Table compression henry@spsystems.net (2005-10-13)

From: Chris F Clark <cfc@shell01.TheWorld.com>
Newsgroups: comp.compilers
Date: 3 Oct 2005 00:23:01 -0400
Organization: The World Public Access UNIX, Brookline, MA
References: 05-09-130 05-09-138 05-10-006
Keywords: parse, practice
Posted-Date: 03 Oct 2005 00:23:01 EDT

Peter Flass <Peter_Flass@Yahoo.com> writes:


> All other things being equal, it's *always* better to be efficient.
> Obviously there are trade-offs, but frequently it doesn't take much
> more effort, if any, to do things the right way. It's just lazy to
> say that with (big memories)|(fast CPUs)|(<fill in the blanks>) it
> doesn't matter.


However, efficiency is not an obvious thing. In Yacc++, we implement
3 levels of table compression. We call them fast, readable, and
small. The fast tables are laid out in the general uncompressed
matrix style, with everything tuned to minimizing memory references per
action. The small tables have compression applied to them, and
require the engine to "binary search" to the desired entry. The
readable tables have some very modest compression but are still
sufficiently array-like to involve no searching; they do require more
steps (and more memory references) to get to the desired entry than
fast does. We could do more.
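
(To make the distinction concrete, here is a minimal sketch in C of
three lookup styles in that spirit. The encodings are purely
illustrative, not the actual Yacc++ table formats: a dense
state-by-symbol matrix for "fast", a comb/row-displacement scheme
with a check array, as in classic yacc, for something "readable"-like,
and a sorted (state, symbol) pair list searched with bsearch for
"small". All sizes and names here are made up.)

/* Illustrative sketch only -- not the real Yacc++ encodings. */
#include <stdlib.h>

enum { NSTATES = 100, NSYMS = 64 };   /* hypothetical grammar sizes */

/* "fast": dense state x symbol matrix, one memory reference per lookup */
static short dense[NSTATES][NSYMS];

static short lookup_fast(int state, int sym)
{
    return dense[state][sym];
}

/* "readable"-like: comb (row-displacement) compression -- still
   indexed, no searching, but about three references per lookup */
static short base[NSTATES], check[NSTATES * NSYMS],
             packed[NSTATES * NSYMS], deflt[NSTATES];

static short lookup_comb(int state, int sym)
{
    int i = base[state] + sym;
    return (check[i] == state) ? packed[i] : deflt[state];
}

/* "small"-like: sorted (state, symbol) pairs, binary searched */
struct entry { short state, sym, action; };
static struct entry pairs[4096];      /* must be kept sorted by cmp() */
static int npairs;

static int cmp(const void *a, const void *b)
{
    const struct entry *x = a, *y = b;
    if (x->state != y->state) return x->state - y->state;
    return x->sym - y->sym;
}

static short lookup_search(int state, int sym)
{
    struct entry key = { (short)state, (short)sym, 0 }, *hit;
    hit = bsearch(&key, pairs, npairs, sizeof pairs[0], cmp);
    return hit ? hit->action : -1;    /* -1 standing in for error/default */
}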


(And I would second Paul Mann's recommendation. That is a wonderful
article that describes a lot of table compression choices. If you
want to do compression, you can at least take one of their suggestions.
If you want to advance the field, you need to consider all the
alternatives they have already worked out.)


Now, the reason I say things are not obvious in terms of efficiency
is that the fast tables have a larger total memory footprint than
readable, about 3x. However, in terms of locations actually
referenced, the footprint is smaller, about 1/2. If the fast tables
are large enough, the larger total footprint could induce page faults,
a loss. If the tables are small enough, page faults aren't an issue
and you might get fewer cache misses with the fast tables.


The only way to tell is to measure. Optimization without measurement
is essentially folly. Back when I worked at DEC on the Unix Alpha
optimizer, we held off turning on an optimization for about 6 months
because it interacted poorly with the other optimizations: even though
it saved instructions overall, it moved a loop to an unaligned
location and slowed an important (SPEC) benchmark down. We only
turned it on when we had done enough other optimizations that we
could get rid of that bad artifact.
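
(As a sketch of what "measure" can mean here, the toy harness below
times many lookups through each of the table forms from the earlier
sketch; it assumes it is compiled in the same file as that sketch.
The random access pattern and iteration count are arbitrary -- a real
measurement would drive the tables from a real grammar and real input.)

#include <stdio.h>
#include <time.h>

/* Hypothetical harness: time n lookups through one table form. */
static double time_lookups(short (*lookup)(int, int), long n)
{
    clock_t t0 = clock();
    volatile short sink = 0;
    unsigned seed = 1;
    for (long i = 0; i < n; i++) {
        seed = seed * 1103515245u + 12345u;        /* cheap PRNG */
        sink ^= lookup((int)(seed % NSTATES), (int)(seed % NSYMS));
    }
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    long n = 50 * 1000 * 1000L;
    printf("dense : %.3fs\n", time_lookups(lookup_fast,   n));
    printf("comb  : %.3fs\n", time_lookups(lookup_comb,   n));
    printf("search: %.3fs\n", time_lookups(lookup_search, n));
    return 0;
}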


It is worth noting that recursive descent parsers are never optimized
for size. They are simply laid out as instructions in the target
language. Many RD parsers are quite speedy. One doesn't pay any
costs for table lookup at all. The code is all there for the machine
to execute, simply by following the PC (program counter).
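
(For instance, a complete recursive-descent recognizer for a toy
grammar like expr : term { '+' term } is just ordinary code; the
grammar here is made up, and the point is only that the "table" is
the compiled control flow itself.)

/* Toy RD recognizer:  expr : term { '+' term } ; term : DIGIT */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p;              /* input cursor */

static void term(void)
{
    if (isdigit((unsigned char)*p))
        p++;
    else {
        fprintf(stderr, "expected digit at '%s'\n", p);
        exit(1);
    }
}

static void expr(void)
{
    term();
    while (*p == '+') {            /* no table lookup: the parser state
                                      is encoded in the PC */
        p++;
        term();
    }
}

int main(void)
{
    p = "1+2+3";
    expr();
    puts(*p == '\0' ? "accepted" : "trailing input");
    return 0;
}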


We've experimented with that in Yacc++ too. We call the table type
case, because we lay the states out as case statements. That also
allows us to do some other optimizations, because we know how the code
executes in context and can skip certain steps that one can't skip in
the general case. For the cases we have tested, the resulting code is
significantly faster. However, again it has a larger total footprint.
For sufficiently large grammars and sufficiently unrobust (host)
compilers, I could see the result being that the code fails to compile.
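
(A hypothetical sketch of that "case" shape: each state becomes an
arm of a switch, so dispatch is compiled control flow rather than an
indexed table fetch. This is a tiny hand-written recognizer for one
'a' or more followed by 'b', not a generated LR engine, but it shows
how states turn into case arms.)

/* Hypothetical "case"-style engine: states as switch arms. */
#include <stdio.h>

static int run(const char *in)
{
    int state = 0;
    for (;;) {
        int c = *in++;
        switch (state) {             /* dispatch by jump, not table fetch */
        case 0:                      /* expect the first 'a' */
            if (c == 'a') { state = 1; break; }
            return 0;
        case 1:                      /* more 'a's, or the final 'b' */
            if (c == 'a') break;     /* stay in state 1 */
            if (c == 'b') { state = 2; break; }
            return 0;
        case 2:                      /* accept iff input is exhausted */
            return c == '\0';
        }
    }
}

int main(void)
{
    printf("%d %d\n", run("aab"), run("ab x"));   /* expect: 1 0 */
    return 0;
}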


Of course, runtime performance isn't the only good quality. The
reason RD parsers are implemented without tables is not to get rid of
the cost of the table lookups, but to make the code more readable.
That is a different kind of efficiency, which is another way of saying
all other things are not always equal.


Hope this helps,
-Chris


*****************************************************************************
Chris Clark                    Internet :  compres@world.std.com
Compiler Resources, Inc.       Web Site :  http://world.std.com/~compres
23 Bailey Rd                   voice    :  (508) 435-5016
Berlin, MA 01503 USA           fax      :  (978) 838-0263 (24 hours)

