Normalized loop
In computer science, a normalized loop (sometimes called a well-behaved loop) is a loop in which the loop variable starts at 0 (or any constant) and is incremented by one at every iteration until the exit condition is met. Normalized loops are very important in compiler theory and loop dependence analysis, as they simplify data dependence analysis.
Well-behaved loops
A well-behaved loop is normally of the form shown in the sketch below. Because the increment is unitary and constant, it is easy to see that, if both a and b hold at least MAX elements, the loop will never access memory outside the allocated range.
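A minimal C sketch of such a loop (the array size of 100, the sample data and the added constant 5 are illustrative assumptions; a, b, MAX and the index i are reused by the later examples):

    #include <stdio.h>
    #define MAX 100

    int main(void)
    {
        int a[MAX], b[MAX], i;

        for (i = 0; i < MAX; i++)          /* fill b with sample data */
            b[i] = i;

        /* Normalized (well-behaved) loop: the index starts at 0 and is
           incremented by exactly one until the test i < MAX fails, so
           a[i] and b[i] always stay inside the MAX-element arrays. */
        for (i = 0; i < MAX; i++)
            a[i] = b[i] + 5;

        printf("%d\n", a[MAX - 1]);        /* prints 104 */
        return 0;
    }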
Non-normalized loops
A non-normalized loop may begin at an index other than zero, increment by a non-unitary amount and have exit conditions that are complicated to define. Such loops are hard to optimize, vectorize and even traverse, especially if functions are executed on any part of the loop conditions.

A simple example, which does not start at the beginning and increments by more than one:
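(In this sketch the start value of 7 matches the later discussion in this article; the step of 3 is an illustrative choice, and a, b, i and MAX are reused from the sketch above.)

    /* Example 1: starts at 7 instead of 0 and increments by 3. */
    for (i = 7; i < MAX; i += 3)
        a[i] = b[i] + 5;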
A more complicated example, with an additional exit condition:
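(One possible form, reusing the declarations above; the second bound is an illustrative assumption.)

    int limit = 50;   /* illustrative second bound */

    /* Example 2: an additional exit condition; how many iterations run
       now depends on which of the two tests fails first. */
    for (i = 7; i < MAX && i < limit; i += 3)
        a[i] = b[i] + 5;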
Loops can also have behaviour that is unpredictable at compile time, where the exit condition depends on the contents of the data being modified:
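(A sketch of such a loop; the test of a[i] against zero is an illustrative assumption.)

    /* Example 3: the exit condition also reads the array that the loop
       body writes, so the number of iterations is unknown at compile
       time. */
    for (i = 7; i < MAX && a[i] != 0; i += 3)
        a[i] = b[i] + 5;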
Or the bounds and the step may even be calculated dynamically by means of function calls:
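(The names max and increment come from the later discussion in this article; start() and the assumption that all three return int are illustrative.)

    /* Example 4: the lower bound, the upper bound and the step are all
       produced by function calls. start(), max() and increment() are
       assumed to be int-returning functions defined elsewhere; max()
       and increment() are re-evaluated on every iteration. */
    for (i = start(); i < max(); i += increment())
        a[i] = b[i] + 5;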
Reverse loops are also very simple, and can be easily normalized:
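(A sketch, again reusing a, b and MAX from above.)

    /* Example 5: a reverse loop; the index runs from MAX - 1 down to 0. */
    for (i = MAX - 1; i >= 0; i--)
        a[i] = b[i] + 5;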
Converting to a normalized loop
If a non-normalized loop does not have dynamic behaviour, it is normally very easy to transform it into a normalized one. For instance, the first example (Example 1) above can easily be converted to the form sketched below.
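A sketch of that conversion, assuming the start of 7 and step of 3 used in the Example 1 sketch above: the new index runs from 0 upward by one, and the original index is reconstructed inside the body.

    /* Example 1, normalized: i now starts at 0 and is incremented by one.
       (MAX - 7 + 2) / 3 is the integer ceiling of (MAX - 7) / 3, i.e. the
       number of iterations the original loop performed, and the original
       index 7, 10, 13, ... is reconstructed as i * 3 + 7. */
    for (i = 0; i < (MAX - 7 + 2) / 3; i++)
        a[i * 3 + 7] = b[i * 3 + 7] + 5;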
The third example, on the other hand, can only be partially normalized: some parallelization becomes possible, but the compiler still cannot determine the loop span (how many iterations there will be), which makes the loop harder to vectorize using multimedia (SIMD) hardware.

Starting at 7 is not much of a problem, as long as the increment is regular, preferably one. When multiple statements inside the loop use the index, private temporary variables may be created to cope with the different iteration paces.
The reverse loop (Example 5) is also easy to normalize:
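(A sketch of that normalization, consistent with the Example 5 sketch above.)

    /* Example 5, normalized: i now runs forward from 0, but the element
       accessed, MAX - 1 - i, still walks backwards through the arrays. */
    for (i = 0; i < MAX; i++)
        a[MAX - 1 - i] = b[MAX - 1 - i] + 5;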
Note that the access is still backwards. In this case it makes no sense to leave it backwards (as there is no data dependence), but where dependences exist, care must be taken to reverse the access as well, since changing it could disrupt the order of assignments.
Impossible conversions
The example with function calls (Example 4) makes it impossible to predict anything about the loop. Unless the functions themselves are trivial (constant), there is no way to know where the loop will start, where it will stop and by how much it will increment on each iteration. Such loops are not only hard to parallelize, they also perform horribly.

On each iteration, the loop evaluates two functions (max and increment). Even if the functions are inlined, the condition becomes too complex to be worth optimizing. The programmer should take extra care not to create such loops unless strictly necessary (if ever).
Another danger of such loops appears when the evaluation depends on the data being modified. For instance, a common error when using iterators is to remove items from a list while iterating over it, or to rely on sizes (for the exit condition) that are no longer true.
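A small C sketch of the second mistake, shrinking an array while looping over it (the names data and count and the shift-based removal are illustrative assumptions):

    int data[6] = { 1, -2, 3, -4, 5, -6 };
    int count = 6;                            /* also used as the exit condition */
    int i, j;

    for (i = 0; i < count; i++) {
        if (data[i] < 0) {
            for (j = i; j < count - 1; j++)   /* remove data[i]: shift the tail left */
                data[j] = data[j + 1];
            count--;    /* the size the exit test relies on has just changed */
            i--;        /* without this adjustment, the element shifted into
                           slot i would be skipped */
        }
    }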
See also
- Dependence analysis
- Loop transformation
- Loop splitting
- Loop fusion
- Loop interchange
- Loop skewing
- Automatic parallelization
- Vectorization (parallel computing)
- Loop dependence analysis