LINPACK
LINPACK is a software library for performing numerical linear algebra on digital computers. It was written in Fortran by Jack Dongarra, Jim Bunch, Cleve Moler, and Gilbert Stewart, and was intended for use on the supercomputers of the 1970s and early 1980s. It has been largely superseded by LAPACK, which runs more efficiently on modern architectures.
LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations.
The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense N by N system of linear equations Ax = b, a common task in engineering. The solution is obtained by Gaussian elimination with partial pivoting, requiring 2/3·N³ + 2·N² floating-point operations. The result is reported in millions of floating-point operations per second (MFLOPS).
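As a rough illustration (not the benchmark distribution itself, which is a Fortran/C package available from Netlib), the Python sketch below times numpy's dense solver, which performs LU factorization with partial pivoting via LAPACK, and converts the elapsed time to MFLOPS using the operation count above. The helper name linpack_like_mflops and the problem size are arbitrary choices for the example.

    import time
    import numpy as np

    def linpack_like_mflops(n, seed=0):
        """Time the solution of a random dense n-by-n system Ax = b and
        report the rate in MFLOPS using the LINPACK operation count
        2/3*n^3 + 2*n^2.  Illustrative only; not the official benchmark."""
        rng = np.random.default_rng(seed)
        a = rng.random((n, n))
        b = rng.random(n)
        t0 = time.perf_counter()
        np.linalg.solve(a, b)          # LU with partial pivoting via LAPACK
        elapsed = time.perf_counter() - t0
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
        return flops / (elapsed * 1.0e6)

    print(f"{linpack_like_mflops(1000):.1f} MFLOPS")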
For large-scale distributed-memory systems, High Performance Linpack (HPL), a portable implementation of the High-Performance LINPACK benchmark, is used as the performance measure for ranking supercomputers in the TOP500 list of the world's fastest computers.
(There is now also a Green500 list, which ranks the machines on the TOP500 list by energy efficiency, in FLOPS per watt.)
The HPL benchmark is run for different matrix sizes N, searching for the size Nmax at which the maximal performance Rmax is obtained. The benchmark also reports the problem size N1/2 at which half of that performance (Rmax/2) is achieved.
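Continuing the sketch above (and assuming its linpack_like_mflops helper), the Nmax/Rmax/N1/2 search can be illustrated by timing a handful of problem sizes, taking Rmax as the best rate observed, Nmax as the size where it occurred, and N1/2 as the smallest tested size reaching at least half of Rmax. The real HPL benchmark instead runs a tuned, distributed solver over user-configured problem and block sizes.

    def scan_sizes(sizes):
        """Toy Nmax/Rmax/N1/2 search: time each size with the helper above,
        keep the best rate, and find the smallest size reaching half of it."""
        rates = {n: linpack_like_mflops(n) for n in sizes}
        n_max, r_max = max(rates.items(), key=lambda kv: kv[1])
        n_half = min(n for n, r in rates.items() if r >= r_max / 2)
        return n_max, r_max, n_half

    print(scan_sizes([250, 500, 1000, 2000]))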
One of the most time-consuming routines in LINPACK is DAXPY, the double-precision Level-1 BLAS routine that computes y := a·x + y. Computer architects design systems to run DAXPY efficiently so as to obtain a higher LINPACK score.
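A sketch of the DAXPY operation just described, in Python rather than the reference Fortran: an explicit loop alongside the vectorized equivalent. It only shows what the routine computes, not how an optimized BLAS implements it.

    import numpy as np

    def daxpy(alpha, x, y):
        """y := alpha*x + y, element by element (double precision).
        The reference routine is Fortran; this loop only shows the operation."""
        for i in range(len(x)):
            y[i] = alpha * x[i] + y[i]
        return y

    x = np.arange(4, dtype=np.float64)
    y = np.ones(4)
    print(daxpy(2.0, x, y.copy()))   # loop version
    print(2.0 * x + y)               # vectorized equivalent, same result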
LINPACK has been criticized as a poor supercomputer benchmark because it does not stress the interconnect between nodes, focusing instead on the floating-point units and cache memory. A high-throughput, low-latency interconnect is the feature that traditionally distinguishes true supercomputers from mere computer clusters, even clusters consisting of thousands of computing nodes.
Thom Dunning, director of the National Center for Supercomputing Applications, had this to say about LINPACK: "The Linpack benchmark is one of those interesting phenomena -- almost anyone who knows about it will deride its utility. They understand its limitations but it has mindshare because it's the one number we've all bought into over the years."
External links
- http://www.netlib.org/linpack/
- http://www.netlib.org/blas/
- http://www.netlib.org/benchmark/linpackjava/ a web-based LINPACK benchmark
- http://www.netlib.org/benchmark/hpl/ The HPL benchmark used in the TOP500
- Jack Dongarra finds 1985 Cray-2 has same Linpack as iPad2