Re: Bit swizzling

"Rick C. Hodgin" <>
Mon, 7 Sep 2020 10:55:08 -0400

          From comp.compilers

Related articles
Bit swizzling (Rick C. Hodgin) (2020-09-05)
Re: Bit swizzling (Hans-Peter Diettrich) (2020-09-05)
Re: Bit Swizzling (John Levine) (2020-09-05)
Re: Bit swizzling (Kaz Kylheku) (2020-09-05)
Re: Bit swizzling ( (2020-09-06)
Re: Bit Swizzling (Chris) (2020-09-06)
Re: Bit swizzling (Martin Ward) (2020-09-07)
Re: Bit swizzling (Rick C. Hodgin) (2020-09-07)
Re: Bit swizzling (Hans-Peter Diettrich) (2020-09-08)
Re: Bit swizzling (Tom Crick) (2020-09-08)
Re: Bit swizzling (gah4) (2020-09-10)
Re: Bit swizzling (Rick C. Hodgin) (2020-09-10)

From: "Rick C. Hodgin" <>
Newsgroups: comp.compilers
Date: Mon, 7 Sep 2020 10:55:08 -0400
Organization: Compilers Central
References: 20-09-014 <>
Keywords: optimize
Posted-Date: 07 Sep 2020 13:58:13 EDT
In-Reply-To: <>
Content-Language: en-US

Thank you, Martin.

I saw a Google developer talk on C/C++ where they discussed generating
optimum code for a given logic flow (as dictated by source code).

I remember thinking, what if your base logic breakout isn't optimum?
What if the way the developer was designing the algorithm wasn't optimum?
What if the entire premise of what you're trying to do needed first to be
broken down into fundamentals, and then re-assembled in a truly optimum way?

There are cases where, if you know the specific input ranges, you could
substitute the algorithm with something completely unlike the actual
function, yet yield the same results.

One such idea is writing to a port and reading back from another port,
where the external device does the processing for you as a co-processor or
custom ASIC.  The actual 6800 algorithm as dictated by the developer may
have been written in C or another language, and would've taken many asm
opcodes to conduct the same workload even with an optimizing compiler.
But an intelligent compiler, one that could break the operations down to
fundamentals, understand what's trying to be accomplished, and then
re-engineer the logic to accommodate the abilities which exist (such as
those extra read/write ports), is what's really required.

In some algorithm cases, the nature of what you're trying to do may be
handled fully by a completely unexpected series of instructions.
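As a concrete illustration (a sketch in C, not from the thread): here are
two 8-bit reversal routines that compile to completely different
instruction sequences yet agree on every input.  The second uses the
well-known folklore multiply/mask trick for 64-bit machines; the constants
are from that trick, not anything specific to the 6800 discussion.

```c
#include <stdint.h>

/* Straightforward shift loop: peel off the low bit eight times. */
uint8_t reverse_loop(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (uint8_t)((r << 1) | (b & 1));
        b >>= 1;
    }
    return r;
}

/* The classic three-operation byte reversal on a 64-bit machine:
   multiply spreads copies of the byte, the mask selects one bit from
   each copy, and the modulus folds them into reversed order. */
uint8_t reverse_mul(uint8_t b)
{
    return (uint8_t)(((uint64_t)b * 0x0202020202ULL
                      & 0x010884422010ULL) % 1023);
}
```

Nothing in the loop version hints that the multiply version computes the
same function; only checking all 256 inputs (or a derivation) shows it.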

I've long desired for my CAlive language to support that type of
optimization.  Walter Banks and I used to discuss the possibility, and had
he lived we would've continued down that line.  It goes along with
another fundamental idea I have for processing that I would like to
explore someday ... if there's time.

I appreciate your response.

Rick C. Hodgin

On 9/7/20 7:08 AM, Martin Ward wrote:
  > On 05/09/2020 17:05, Rick C. Hodgin wrote:
  >> Are there any existing algorithms which examine the operations that
  >> must be conducted and then create an optimized / minimal sequence of
  >> mechanical steps to conduct it given a constrained set of features
  >> (such as those present on a given CPU)?
  > The process you are describing is called "Superoptimization":
  > finding the optimal code sequence for one loop-free sequence
  > of instructions.
  > The term superoptimization was first coined by Alexia Massalin
  > in the 1987 paper "Superoptimizer: A Look at the Smallest Program".
  > Back in 1980 there was a discussion in the magazine
  > "Practical Electronics" regarding the shortest sequence of
  > instructions needed to reverse the bits in the accumulator
  > for a 6800 system (an operation that is needed in some Fast Fourier
  > Transform algorithms).
  > One solution (given in June 1980, page 70) consisted of just
  > two instructions and used no additional data.
  > The instructions were:
  > STAA PORTA  write to output port
  > LDAB PORTB  read from input port
  > with the output port being wired directly to the input port with
  > the bits reversed!
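
The superoptimization search Martin describes above can be sketched as a
brute-force enumeration.  This is a toy in C with a made-up five-opcode
accumulator machine (not real 6800 opcodes): it tries every sequence up
to a length bound and keeps the shortest one that matches a target
function on all 256 inputs.

```c
#include <stdint.h>

enum { OP_NOT, OP_NEG, OP_INC, OP_SHL, OP_SHR, N_OPS };

/* Execute one opcode of the toy accumulator machine. */
uint8_t step(uint8_t a, int op)
{
    switch (op) {
    case OP_NOT: return (uint8_t)~a;
    case OP_NEG: return (uint8_t)-a;
    case OP_INC: return (uint8_t)(a + 1);
    case OP_SHL: return (uint8_t)(a << 1);
    case OP_SHR: return (uint8_t)(a >> 1);
    }
    return a;
}

/* The function we want a sequence for: DEC (a - 1), which is
   deliberately NOT in the instruction set. */
static uint8_t target(uint8_t a) { return (uint8_t)(a - 1); }

/* Does this opcode sequence compute target() on every 8-bit input? */
int matches(const int *seq, int len)
{
    for (int x = 0; x < 256; x++) {
        uint8_t a = (uint8_t)x;
        for (int i = 0; i < len; i++)
            a = step(a, seq[i]);
        if (a != target((uint8_t)x))
            return 0;
    }
    return 1;
}

/* Enumerate every sequence of exactly `len` opcodes. */
static int search(int *seq, int pos, int len)
{
    if (pos == len)
        return matches(seq, len);
    for (int op = 0; op < N_OPS; op++) {
        seq[pos] = op;
        if (search(seq, pos + 1, len))
            return 1;
    }
    return 0;
}

/* Iterative deepening: the first length that matches is the shortest.
   Returns the length found (sequence left in seq[]), or 0 if none. */
int find_shortest(int *seq, int max_len)
{
    for (int len = 1; len <= max_len; len++)
        if (search(seq, 0, len))
            return len;
    return 0;
}
```

With this toy set, find_shortest() discovers that DEC is NEG followed by
NOT (since ~(-a) = a - 1 mod 256): two instructions, found purely by
search rather than by any insight into the problem.  Real
superoptimizers like Massalin's prune this exponential space far more
cleverly, but the shape of the search is the same.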
