Re: Which Prolog for writing Parsers/Generators? firstname.lastname@example.org (Paul Tarau) (1999-06-29)
From: Paul Tarau <email@example.com>
Date: 29 Jun 1999 03:01:35 -0400
References: <3770A6E3.9FC8CE46@denic.de> <3770CF12.8D348C47@lim.univ-mrs.fr> <firstname.lastname@example.org> <email@example.com> <firstname.lastname@example.org> <email@example.com>
Jan Wielemaker wrote:
> Daniel Diaz wrote:
> >think there are a lot of good ideas in your system. However, to clarify
> >things for the "final user/programmer", I'd like to make some remarks about the
> >way both systems produce "executables".
> >On the other hand, when SWI-Prolog produces an executable it mainly generates
> >a shell script with enough information to restart SWI-Prolog in the same
> >context (dumping important parts of internal data structures). When such an
> >executable is run, it starts SWI-Prolog and reinitializes its memory with the
> >data previously stored. This is then similar to a "save+load state". The
> >consequence of this approach is that the performance of the "executable" is
> >not better than running the associated Prolog program under the top-level.
> >Furthermore, the "executable" is not stand alone since it needs to run
> >SWI-Prolog which then must be present at run-time.
> Partly true. SWI-Prolog knows of two types of executables. The first
> indeed is a shell-script that requires SWI-Prolog itself installed.
> The second, however, is produced using the option stand_alone(true) to
> qsave_program and the result is the emulator with the saved-state
> appended to it. This is a single file that will run without any part
> of SWI-Prolog around on any binary compatible platform.
> On the other hand, `byte-code' compiled Prolog isn't necessarily much
> slower than native code (see Quintus), while it is most certainly
> smaller. Admitted, SWI-Prolog is rather slow running pure deterministic
> Prolog (but fast on loading, database manipulation and meta-calling),
> but I believe it should be possible to get close to native code without
> giving up the benefits. Using GCC, the `dispatching' code is
> To exaggerate a little, native code outperforms byte-code easily on
> naive-reverse-like tests, but not on large applications that do not
> spend most of their time in a few small expensive predicates.
There's indeed a fine balance to maintain between generating emulated
and native code, as the latter might lead to serious code size
increases in Prolog (avoiding the inclusion of unused runtime support
helps by a nice constant factor, though, as is the case in GNU Prolog).
Moreover, with the advent of integrated networking, overall execution
speed might depend as much on code size, dynamic (re)compilation
speed, code compression, code/data transmission protocols etc., as it
used to depend on the speed of the resulting binary code.
Some form of adaptive compiler technology is needed to get the best of
both worlds. BinProlog (see links from http://www.binnetcorp.com )
currently allows the programmer to control the amount of native code
(through compilation to C), byte code, and interpreted dynamic code.
The move between interpreted and byte code is already truly adaptive
(based on run-time statistics about use/update ratios), while the
choice between compilation to C and bytecode is still programmer
controlled (this allows full control over the size vs. speed trade-off).
However, in high-level languages like Prolog or Java the future seems
to be moving towards a completely automated choice of code
representation. Java's relatively transparent use of JIT compilation
is a good example of this trend, and, in client/server or mobile code
frameworks, the choice of where compilation itself (or its
acceleration) occurs is also likely to be automated.
In fact, it matters very little to programmers how their code is
actually executed, as long as a trusted automation tool does the
job of choosing between code representations consistently well.
just released: BinProlog 7.50 + Jinni with Prolog accelerator and GUI