Data parallelism
Data parallelism is a form of parallelization of computing across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different parallel computing nodes. It contrasts with task parallelism as another form of parallelism.
Description
In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.
For instance, consider a 2-processor system (CPUs A and B) in a parallel environment, and suppose we wish to perform a task on some data d. It is possible to tell CPU A to do that task on one part of d and CPU B on another part simultaneously, thereby reducing the duration of the execution. The data can be assigned using conditional statements as described below. As a specific example, consider adding two matrices. In a data parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half of the matrices. Since the two processors work in parallel, the job of performing matrix addition would take one half the time of performing the same operation in serial using one CPU alone.
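As a rough illustration only (not part of the original article), the Java sketch below splits such a matrix addition between two threads, one standing in for CPU A on the top half of the rows and one for CPU B on the bottom half; the class and method names here are hypothetical.

// Hedged sketch: two threads each add half of the rows of two matrices.
public class MatrixAddSketch {
    public static void main(String[] args) throws InterruptedException {
        int n = 4;
        double[][] a = new double[n][n], b = new double[n][n], sum = new double[n][n];

        int mid = n / 2;
        Thread cpuA = new Thread(() -> addRows(a, b, sum, 0, mid)); // "CPU A": top half
        Thread cpuB = new Thread(() -> addRows(a, b, sum, mid, n)); // "CPU B": bottom half

        cpuA.start();
        cpuB.start();
        cpuA.join();
        cpuB.join();
    }

    // Adds rows [from, to) of a and b into sum.
    static void addRows(double[][] a, double[][] b, double[][] sum, int from, int to) {
        for (int i = from; i < to; i++)
            for (int j = 0; j < a[i].length; j++)
                sum[i][j] = a[i][j] + b[i][j];
    }
}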
Data parallelism emphasizes the distributed (parallelized) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.
Example
The program below, expressed in pseudocode, applies some arbitrary operation, foo, to every element in the array d and illustrates data parallelism. (For some input data, lower_limit will end up greater than upper_limit, for example when d.length evaluates to 1 and round rounds towards zero; this is just an example, and no particular type of rounding is required. It is assumed that the loop then exits immediately, i.e. performs zero iterations.)

if CPU = "a"
    lower_limit := 1
    upper_limit := round(d.length / 2)
else if CPU = "b"
    lower_limit := round(d.length / 2) + 1
    upper_limit := d.length

for i from lower_limit to upper_limit by 1
    foo(d[i])
If the above example program is executed on a 2-processor system, the runtime environment may execute it as follows:
- In an SPMD system, both CPUs will execute the code.
- In a parallel environment, both will have access to d.
- A mechanism is presumed to be in place whereby each CPU will create its own copy of lower_limit and upper_limit that is independent of the other.
- The if clause differentiates between the CPUs. CPU "a" will read true on the if, and CPU "b" will read true on the else if, so each ends up with its own values of lower_limit and upper_limit.
- Now, both CPUs execute foo(d[i]), but since each CPU has different values of the limits, they operate on different parts of d simultaneously, thereby distributing the task among themselves. Obviously, this will be faster than doing it on a single CPU.
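A minimal Java sketch of the same block split (an illustration assuming a plain two-thread program rather than a real SPMD runtime; foo and every name below are placeholders) shows how each worker carries its own copy of the limits, expressed here as 0-based, half-open index ranges:

import java.util.function.IntConsumer;

// Hedged sketch: two threads split the array d in half, each with its own limits.
public class BlockSplitSketch {
    public static void main(String[] args) throws InterruptedException {
        int[] d = {1, 2, 3, 4, 5, 6, 7};
        IntConsumer foo = x -> System.out.println(x); // arbitrary per-element operation

        // Each "CPU" gets its own, independent copy of the limits.
        Thread cpuA = new Thread(() -> run(d, 0, d.length / 2, foo));        // first half
        Thread cpuB = new Thread(() -> run(d, d.length / 2, d.length, foo)); // second half

        cpuA.start();
        cpuB.start();
        cpuA.join();
        cpuB.join();
    }

    // Applies foo to d[lowerLimit] .. d[upperLimit - 1].
    static void run(int[] d, int lowerLimit, int upperLimit, IntConsumer foo) {
        for (int i = lowerLimit; i < upperLimit; i++)
            foo.accept(d[i]);
    }
}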
This concept can be generalized to any number of processors. However, when the number of processors increases, it may be helpful to restructure the program in a similar way (where cpuid is an integer between 1 and the number of CPUs, and acts as a unique identifier for every CPU):

for i from cpuid to d.length by number_of_cpus
    foo(d[i])
For example, on a 2-processor system CPU A (cpuid 1) will operate on odd entries and CPU B (cpuid 2) will operate on even entries.
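A hedged Java sketch of this cyclic (round-robin) distribution follows; it numbers the workers from 0 instead of 1, and the array, the thread count, and foo are all placeholders:

import java.util.function.IntConsumer;

// Hedged sketch: each worker starts at its own id and strides by the number of CPUs.
public class CyclicSplitSketch {
    public static void main(String[] args) throws InterruptedException {
        int numberOfCpus = 4;
        int[] d = new int[100];
        IntConsumer foo = x -> { /* arbitrary per-element operation */ };

        Thread[] workers = new Thread[numberOfCpus];
        for (int cpuid = 0; cpuid < numberOfCpus; cpuid++) {
            final int id = cpuid;
            workers[id] = new Thread(() -> {
                for (int i = id; i < d.length; i += numberOfCpus)
                    foo.accept(d[i]);
            });
            workers[id].start();
        }
        for (Thread w : workers)
            w.join();
    }
}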
JVM Example
Similar to the previous example, data parallelism is also possible on the Java Virtual Machine (JVM), using Ateji PX, an extension of Java. The code below illustrates data parallelism on the JVM.
Branches in a parallel composition can be quantified. This is used to apply the || operator to all elements of an array or a collection:
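The original article includes no code at this point. As a sketch only, based on the published Ateji PX notation for quantified parallel branches (treat the exact syntax, and the names array and N, as assumptions), incrementing every element in parallel might look like:

[
    // One parallel branch per value of i: a quantified || composition.
    || (int i : N) array[i]++;
]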
The equivalent sequential code would be:
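Again, no code survives here; the sequential counterpart in plain Java would simply be a loop over the same (assumed) array of length N:

for (int i = 0; i < N; i++)
    array[i]++;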
Quantification can introduce an arbitrary number of generators (iterators) and filters. Here is how we would update the upper left triangle of a matrix:
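As before, the code itself is missing from the source; a hedged sketch of how two generators and a filter might combine in Ateji PX to update only the upper left triangle (cells with i + j < N) of an N-by-N matrix:

[
    // Two generators (i, j) plus a filter: only cells with i + j < N are updated.
    || (int i : N, int j : N, if i + j < N) matrix[i][j]++;
]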