|Assemblers can be fast yale!harvard!ames!hplabs!hpccc!culberts (Bruce Culbertson) (1987-12-16)
|Wed, 16 Dec 87 13:39:48 pst
|Bruce Culbertson <yale!harvard!ames!hplabs!hpccc!culberts>

Several people have implied recently that assemblers are necessarily
slow because they require two or more scans of the source being assembled.
I recently wrote an assembler for the NS32000. In the process, I concluded
that much of what I have read about how assemblers ought to be designed is
out of date. In particular, I do not believe assemblers ought to look at
the source more than once. I doubt that what I am about to describe is new.
However, the slow speed and various other properties of some very popular
assemblers (especially for MSDOS) lead me to believe that the traditional
two-pass assembler is still common today.

The first phase of my assembler scans the source, emitting what it can of
the object, and creating data structures which allow the final, correct,
object to be emitted once the whole source has been seen. In particular,
a directed graph is built whose nodes are symbols. An edge exists from
node A to node B if A is defined by an expression which references B.
Expressions which cannot be resolved when they are seen in the source
are stored in a compact, tokenized, reverse Polish form.

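In sketch form, the phase-one bookkeeping might look like the following. All names and the token format here are my assumptions for illustration; the article does not show the assembler's internals.

```python
# A minimal sketch of the phase-one data structures described above.
# All names and the token representation are hypothetical.

symbols = {}   # name -> value once resolved, or its RPN token list
edges = {}     # edges[A] = symbols referenced by A's defining expression
patches = []   # (object_offset, rpn_tokens) awaiting back-patching

def record_definition(name, rpn_tokens):
    """Record a symbol defined by an expression that cannot be resolved
    yet.  Adds an edge from `name` to every symbol the expression
    references, building the directed graph described above."""
    edges[name] = {tok for kind, tok in rpn_tokens if kind == "sym"}
    symbols[name] = rpn_tokens

# "A = B*2 + 1" seen before B is defined would be stored, tokenized
# and in reverse Polish order, as:
record_definition("A", [("sym", "B"), ("num", 2), ("op", "*"),
                        ("num", 1), ("op", "+")])
```

The graph edges are what later let phase three evaluate symbols in dependency order.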
During the first phase, all forward branch displacements are assumed to
require four bytes. Phase two reduces the length of displacements, where
possible, to one or two bytes. For this purpose, preliminary addresses of
labels are recorded during phase one; the addresses are in general too large
because the labels may be preceded by displacements which can be reduced.
Because reducing one displacement can never increase another, the
approximate addresses are sufficient for determining displacement sizes.
At the end of phase two, the final addresses of all labels are known.

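The one-directional nature of the reduction is what makes a single relaxation pass safe. A sketch, with NS32000-style 1-, 2-, and 4-byte displacement widths (the exact ranges are illustrative, not taken from the article):

```python
# Sketch of phase-two displacement reduction.  Preliminary addresses
# from phase one assume every forward displacement takes 4 bytes, so
# final addresses can only be smaller or equal.

def displacement_size(disp):
    """Bytes needed for an NS32000-style displacement.  The ranges
    (7-, 14-, 30-bit signed) are illustrative approximations."""
    if -64 <= disp <= 63:
        return 1
    if -8192 <= disp <= 8191:
        return 2
    return 4

def choose_sizes(branches, labels):
    """branches: [(preliminary_branch_addr, target_label), ...]
    labels: {label: preliminary_addr}.
    The preliminary displacement bounds the final one from above,
    because shrinking any displacement never moves a label farther
    from a branch, so each size can be chosen independently."""
    return [displacement_size(labels[target] - addr)
            for addr, target in branches]
```

For example, `choose_sizes([(100, "near"), (104, "far")], {"near": 110, "far": 20000})` picks a 1-byte displacement for the first branch and a 4-byte one for the second.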
Phase three evaluates all expressions which were unresolved during phase
one and back patches the object. If expression E references unresolved
symbol S, then the expression for S is recursively evaluated before
proceeding with the evaluation of E. Thus, chains of symbols which have
forward references to other symbols may be unbounded in length; this
is not true for the traditional two-pass assembler.

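One way to realize that recursion is sketched below; the token format and the cycle check are my assumptions, not details from the article.

```python
def evaluate(name, symbols, visiting=None):
    """Resolve symbol `name`, recursing through arbitrarily long chains
    of forward references.  `symbols` maps a name either to an int
    (already resolved) or to its tokenized RPN expression."""
    if visiting is None:
        visiting = set()
    if name in visiting:
        raise ValueError("circular definition involving " + name)
    value = symbols[name]
    if isinstance(value, int):
        return value
    visiting.add(name)
    stack = []
    for kind, tok in value:               # reverse Polish evaluation
        if kind == "num":
            stack.append(tok)
        elif kind == "sym":               # recurse on unresolved symbols
            stack.append(evaluate(tok, symbols, visiting))
        else:                             # kind == "op"
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b}[tok])
    visiting.discard(name)
    symbols[name] = stack[0]              # memoize: each symbol evaluated once
    return stack[0]
```

With C = 4, B = C*3, and A = B+2 recorded in that form, `evaluate("A", symbols)` yields 14, resolving B on the way; memoizing each result keeps the total work linear even for long chains.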
Since users request listings infrequently, I decided it was permissible
to reread the source to produce a listing (though the source does not
need to be parsed). An optional fourth pass merges the object with the
source for this purpose.

I assume the two-pass assembler evolved because it could assemble very
large programs on computers with very limited memory. Since we no longer
use machines with small memories, I think the approach I took makes more
sense for a modern assembler.

[It's true, assemblers used to run in real memory just like any other
program. I've used very credible assemblers that ran in 4K. But if you
have infinite memory, sure, what the heck, buffer it all in memory. The
scheme in your pass two is the same one that existing Unix assemblers use
now to do branch optimization. It's not quite optimal, but the extra work
to ensure optimality isn't worth it. But be prepared for users to punch
you in the nose when they add two lines to their assembler program and
the assembler says "you lose, can't buffer it in 640K." -John]