The Lazy-Code-Motion problem


The Lazy-Code-Motion problem :
To avoid redundant computation, shrink code size, or save resources, code motion optimizations move computations around a control-flow graph (CFG). For example, loop-invariant code motion recognizes expressions computed inside a loop that have the same value on every iteration and hoists them out of the loop so they are computed only once. Similarly, instead of computing a subexpression ‘e’ twice in the expressions f(e) and g(e), a compiler can compute it once and store it in a temporary register.

main {
  x: int = const 0;
  one: int = const 1;
  tmp: int = add x one;
  y: int = id tmp;
  z: int = id tmp;
}

All paths through the program now include only one x+1 computation. This is optimal code, at least in that dimension, and exactly the outcome expected from partial redundancy elimination or lazy code motion. So what distinguishes lazy code motion from its eager alternatives?
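As a source-level illustration (a hypothetical sketch, not the Bril IR above), partial redundancy elimination turns an expression that is redundant on some paths into one computed exactly once on every path. The evaluation counter is only there to make the effect observable:

```python
# Hypothetical source-level sketch of partial redundancy elimination (PRE).
# Before: x + 1 is evaluated twice on the path where cond is true.
def before(x, cond):
    evals = 0
    if cond:
        y = x + 1; evals += 1
    else:
        y = 0
    z = x + 1; evals += 1
    return y, z, evals

# After PRE: the expression is evaluated exactly once on every path.
def after(x, cond):
    if cond:
        t = x + 1; evals = 1
        y = t
    else:
        t = x + 1; evals = 1
        y = 0
    z = t
    return y, z, evals

assert before(4, True)[2] == 2          # redundant on the taken path
assert after(4, True)[2] == 1           # one evaluation on every path
assert before(4, True)[:2] == after(4, True)[:2]   # same results
```

The transformation inserts the computation on the path where it was missing, so the later, originally redundant occurrence can simply reuse the temporary.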

A note on register pressure :
On an architecture with a finite, fixed number of registers, the compiler must allocate storage for an unbounded number of variables when lowering IR code to assembly. If there are more live variables than registers, some variables wind up on the stack. Memory is slower than registers, so this “spilling” is expensive.
Earlier passes should therefore try to reduce the number of spills introduced later during register allocation. The exact number of spills in a program is determined by the register allocation technique in use, so optimizing directly against that statistic is a fool’s errand; instead, compilers minimize register pressure, the number of values that are live at the same time.

Eager code motion moves variable definitions (computations) further away from their uses, lengthening their live ranges. Any performance benefit from code motion can easily be clawed back by the accompanying register pressure. Lazy code motion, rather than placing computations as early as possible, pushes them down to a later program point while still avoiding redundant computation. Indeed, one study describes lazy code motion as placing computations “as late as possible,” although this phrase is misleading when taken out of context: the static analysis first finds the program points where a computation can safely be placed, and only then selects the latest of those candidates.
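The live-range effect can be made concrete. A minimal sketch, assuming a straight-line program of (destination, uses) pairs rather than actual Bril instructions, measures register pressure as the peak number of simultaneously live values:

```python
# Minimal sketch: register pressure as the peak number of simultaneously
# live values in a straight-line program. The (dest, uses) tuple format
# is an assumption for illustration, not Bril's actual encoding.
def max_pressure(instrs):
    # Walk backwards: a value is live from its definition to its last use.
    live = set()
    peak = 0
    for dest, uses in reversed(instrs):
        live.discard(dest)       # killed at its definition
        live.update(uses)        # live before this instruction
        peak = max(peak, len(live))
    return peak

# Hoisting t's definition early (eager) lengthens its live range,
# so more values are live at once than with the lazy placement.
lazy  = [("a", []), ("b", []), ("c", ["a", "b"]), ("t", ["a"]), ("d", ["t"])]
eager = [("a", []), ("t", ["a"]), ("b", []), ("c", ["a", "b"]), ("d", ["t"])]
assert max_pressure(lazy) == 2
assert max_pressure(eager) == 3
```

The two programs compute the same values; only the placement of t's definition differs, yet the eager version keeps one more value live across the middle of the program.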

[Figure: The lazy-code-motion problem]

Limitations :
The optimization assumes that lexically equal expressions are always placed in the same pseudoregister; later work may weaken this assumption by refining the dataflow analyses. The assumption forces superfluous move instructions that fetch computed values out of temporaries, which increases register pressure. A more intelligent rewriting pass could reduce these costs.
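One such rewriting pass is copy propagation. A sketch, assuming a three-address tuple form rather than actual Bril syntax: the move (“id”) inserted by the temporary-allocation scheme can be dropped by forwarding its source to later uses (safe here because the moved-to name has no other definitions or uses beyond this block):

```python
# Sketch of a cleanup pass over assumed (dest, op, args) tuples.
# LCM stores each expression in a temporary and copies it out with a
# move ("id"); copy propagation forwards the source of each move to
# later uses, so the move itself can be removed.
def copy_propagate(instrs):
    subst = {}   # moved-to name -> original temporary
    out = []
    for dest, op, args in instrs:
        args = [subst.get(a, a) for a in args]
        if op == "id":
            subst[dest] = args[0]   # remember the copy; emit nothing
        else:
            out.append((dest, op, args))
    return out

prog = [("t", "add", ["x", "one"]),
        ("y", "id", ["t"]),          # superfluous move inserted by LCM
        (None, "print", ["y"])]
assert copy_propagate(prog) == [("t", "add", ["x", "one"]),
                                (None, "print", ["t"])]
```

The move disappears and the use of y is rewired to read the temporary directly, cutting both an instruction and a live range.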

At the other end of the optimization spectrum, the computation-placement algorithm is inefficient. Lazy code motion moves computations onto the CFG’s edges, which requires stitching new basic blocks into those edges. While such blocks are genuinely required on some edges, on many others an inserted block can safely be merged with its predecessor or successor block; this would reduce the number of jumps and might improve performance. Similarly, the pretty-printer for CFGs does not omit jumps where fall-through would work—this may seem like a trivial matter, but it can affect performance and code size. Both of these problems could be solved by running a simplification pass after lazy code motion.
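The edges that genuinely need a new block are the critical edges: those from a block with multiple successors to a block with multiple predecessors. A minimal sketch, assuming the CFG is given as a successor map (an illustrative representation, not the tool's actual data structure):

```python
# Minimal sketch: find the critical edges of a CFG given as a
# {block: [successors]} map. Code inserted on a non-critical edge can
# instead be merged into its lone predecessor or lone successor.
def critical_edges(succs):
    preds = {}
    for u, vs in succs.items():
        for v in vs:
            preds.setdefault(v, []).append(u)
    return {(u, v)
            for u, vs in succs.items() if len(vs) > 1   # branching source
            for v in vs if len(preds[v]) > 1}           # merging target

# If-without-else: only the edge cond -> join is critical, so only
# an insertion on that edge needs a fresh basic block.
cfg = {"entry": ["cond"], "cond": ["then", "join"],
       "then": ["join"], "join": []}
assert critical_edges(cfg) == {("cond", "join")}
```

A post-pass that merges blocks inserted on the non-critical edges into their neighbors would remove exactly the superfluous jumps described above.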

Conclusion :
Because lazy code motion is designed to avoid redundant expression computations, the number of computations never rises after optimization. However, the conservative temporary-allocation scheme and the inserted basic blocks add moves and jumps, which hurts the total dynamic instruction count. Loop-heavy benchmarks (basic, hoist-thru-loop) show considerable speedups because computations are hoisted out of loops.

Last Updated : 17 Jun, 2021