Re: Managing the JIT

"BGB / cr88192" <>
Sun, 2 Aug 2009 14:58:11 -0700

          From comp.compilers


From: "BGB / cr88192" <>
Newsgroups: comp.compilers
Date: Sun, 2 Aug 2009 14:58:11 -0700
References: 09-07-079 09-07-093 09-07-108 09-07-113 09-07-117 09-08-001
Keywords: incremental, code
Posted-Date: 06 Aug 2009 13:59:23 EDT

"Armel" <> wrote in message
> From: "BGB / cr88192" <>
>> "Armel" <> wrote in message
>> however, the textual interface provides capabilities not available if
>> direct function calls were used, such as using multi-pass compaction
>> (AKA: the first pass assumes all jumps/... to be full length, but
>> additional passes allow safely compacting the jumps).
> why could a binary-near interface not do compaction? when I describe
> the interface with something like jit.move(ax,25), it does not preclude
> the usage of higher-level semantics such as jit.jump(label_id), later
> writing jit.label(label_id), and finally writing jit.end( ) to close
> the function being compiled, optimizing all jumps and finalizing labels.

presumably the whole point of using a function-call driven API would
be that a single-pass approach would be used (and have the API
directly driving machine-code production).

typically, compacting jumps requires multiple passes, where each pass
can see where each label landed in the previous pass, and so how far
each jump actually is.

with a single pass, the exact distance of a forward jump can't be
known, so the opcode would likely have to assume the largest reasonable
size, or else require the size to be given explicitly.

>> granted, my internal code does not go about attempting nearly so nice an
>> interface as asmjit (with a single function per instruction, ...),
>> rather,
>> the interface is a good deal more terrible...
>> void BASM_OutOpGeneric2(BASM_Context *ctx, int op, int w,
>> char *lbl0, int breg0, int ireg0, int sc0, long long disp0,
>> char *lbl1, int breg1, int ireg1, int sc1, long long disp1);
> asmjit just splits the "op" part out into separate functions, in a
> form which helps the user avoid wrong calls; that's all I expect.

yes, however, my assembler does not have per-opcode machinery.
most of it is, internally, fairly "generic"...

>> what does the binary interface buy you?...
>> i=3;
>> basm_print("mov rax, %d\n", i);

> it is far too easy IMHO to write something which compiles but does
> not run. (rax, i) assures me immediately at compile time
> that such an instruction really exists, and that I could not make a
> mistake while writing it. it seems less error-prone to me.


as is though, the assembler will reject code if it is ill-formed...

I had thought of it, and such a function-driven mechanism could be faked
with macros...

>> note that wrapping every single opcode with a function would likely
>> be far more work than writing most of the assembler.

> hehe, my place here is as a _user_ of a JIT, not as a designer; I do
> not care whether it is more or less work for them.

originally, I had tried using function calls (granted, in a slightly
lower-level form), and at the time, I had found it to be far more awkward
than was worthwhile...

it was not long before I switched to a textual interface...

>> [...]
>> the overall performance difference either way is likely to be small,
>> as in this case the internal processing is likely to outweigh the
>> cost of parsing (figuring out which opcode to use, ...).
> yes, probably; parsing generated assembler is really easy (no real
> need for ultra-good error reporting, no need for macro handling...).


granted though, a bad opcode, or bad opcode args, will cause the assembler
to reject the ASM code...

eventually though, in my case I did end up adding macro support, but this
was mostly as a convenience for when dealing with larger chunks of ASM...

the main use would be for built-in handlers, which could use macros to
assemble different code for different architectures and CPU features.

note that my compiler itself does not use them, as it typically generates
code directly for the target processor in question...
