Wagner–Fischer algorithm
The Wagner–Fischer algorithm is a dynamic programming algorithm that measures the Levenshtein distance between two strings of characters.
Calculating distance
The Wagner–Fischer algorithm computes Levenshtein distance based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.
A straightforward implementation, as pseudocode, for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:
int LevenshteinDistance(char s[1..m], char t[1..n])
{
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t;
    // note that d has (m+1)x(n+1) values
    declare int d[0..m, 0..n]

    for i from 0 to m
        d[i, 0] := i // the distance of any first string to an empty second string
    for j from 0 to n
        d[0, j] := j // the distance of any second string to an empty first string

    for j from 1 to n
    {
        for i from 1 to m
        {
            if s[i] = t[j] then
                d[i, j] := d[i-1, j-1]        // no operation required
            else
                d[i, j] := minimum
                           (
                             d[i-1, j] + 1,   // a deletion
                             d[i, j-1] + 1,   // an insertion
                             d[i-1, j-1] + 1  // a substitution
                           )
        }
    }

    return d[m, n]
}
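The pseudocode translates almost line for line into a runnable sketch; Python is used here (not part of the original), with the only adjustment being Python's 0-based string indexing:

```python
def levenshtein_distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # d[i][j] will hold the Levenshtein distance between s[:i] and t[:j];
    # note that d has (m+1) x (n+1) values
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # distance of any prefix of s to the empty string
    for j in range(n + 1):
        d[0][j] = j  # distance of the empty string to any prefix of t
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]     # no operation required
            else:
                d[i][j] = min(
                    d[i - 1][j] + 1,          # a deletion
                    d[i][j - 1] + 1,          # an insertion
                    d[i - 1][j - 1] + 1,      # a substitution
                )
    return d[m][n]
```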
Two examples of the resulting matrix appeared here (in the interactive original, hovering over a number revealed the operation performed to get that number); the tables are not reproduced in this copy.
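As an illustration, the full distance matrix can be computed and printed for a concrete pair of strings; the pair "kitten"/"sitting" below is my own choice of example, not necessarily the pair shown in the original tables:

```python
def levenshtein_matrix(s: str, t: str):
    """Return the full (m+1) x (n+1) matrix of prefix distances."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution (or match)
    return d

matrix = levenshtein_matrix("kitten", "sitting")
for row in matrix:
    print(" ".join(f"{v:2d}" for v in row))
# The bottom-right entry is the distance between the two full strings.
```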
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.

Proof of correctness
As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:
- It is initially true on row and column 0 because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
- If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
- Otherwise, the distance is the minimum of the three possible ways to do the transformation:
  - If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
  - If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
  - If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
- The operations required to transform s[1..m] into t[1..n] are of course the number required to transform all of s into all of t, and so d[m,n] holds our result.
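For short strings the invariant can also be checked empirically: a breadth-first search over single-character edits finds the true minimum number of operations, which can be compared against the recurrence's answer. The sketch below is an illustrative check of my own, not part of the original proof, and it assumes a small fixed alphabet to keep the brute-force search space finite:

```python
from collections import deque

ALPHABET = "abcd"  # small alphabet keeps the brute-force search space tiny

def neighbors(w):
    """All strings reachable from w by one insertion, deletion, or substitution."""
    out = set()
    for i in range(len(w)):
        out.add(w[:i] + w[i + 1:])              # deletion at position i
        for c in ALPHABET:
            out.add(w[:i] + c + w[i + 1:])      # substitution at position i
    for i in range(len(w) + 1):
        for c in ALPHABET:
            out.add(w[:i] + c + w[i:])          # insertion at position i
    out.discard(w)
    return out

def bfs_distance(s, t):
    """Minimum number of single edits turning s into t, by brute-force BFS."""
    # An optimal edit sequence (deletions, then substitutions, then
    # insertions) never needs intermediate words longer than max(|s|, |t|).
    limit = max(len(s), len(t)) + 1
    seen = {s}
    frontier = deque([(s, 0)])
    while frontier:
        w, k = frontier.popleft()
        if w == t:
            return k
        for v in neighbors(w):
            if v not in seen and len(v) <= limit:
                seen.add(v)
                frontier.append((v, k + 1))

def dp_distance(s, t):
    """The Wagner-Fischer recurrence, for comparison."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

for s, t in [("ab", "cda"), ("abc", "ab"), ("a", "b"), ("", "ab"), ("abca", "acb")]:
    assert bfs_distance(s, t) == dp_distance(s, t)
```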
This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.

Possible improvements
Possible improvements to this algorithm include:
- We can adapt the algorithm to use less space, O(m) instead of O(mn), since it only requires that the previous row and current row be stored at any one time.
- We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
- We can normalize the distance to the interval [0,1].
- If we are only interested in the distance when it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k+1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shortest string.
- We can give different penalty costs to insertion, deletion and substitution. We can also give penalty costs that depend on which characters are inserted, deleted or substituted.
- By initializing the first row of the matrix with 0, the algorithm can be used for fuzzy string search of a string in a text. This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position.
- This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
- By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m (1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.
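The space improvement mentioned above (keeping only the previous and the current row of the matrix) can be sketched as follows; the function name and test strings are my own:

```python
def levenshtein_two_rows(s: str, t: str) -> int:
    """Levenshtein distance keeping only two rows of the matrix at a time."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # row i-1 of the full matrix
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # row i; curr[0] = distance to empty t
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                curr[j] = prev[j - 1]              # no operation required
            else:
                curr[j] = 1 + min(prev[j],         # deletion
                                  curr[j - 1],     # insertion
                                  prev[j - 1])     # substitution
        prev = curr
    return prev[n]
```

Only O(n) cells are live at any moment, at the cost of losing the full matrix (and with it the ability to trace back the edit operations).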
Upper and lower bounds
The Levenshtein distance has several simple upper and lower bounds that are useful in applications which compute many of them and compare them. These include:
- It is always at least the difference of the sizes of the two strings.
- It is at most the length of the longer string.
- It is zero if and only if the strings are identical.
- If the strings are the same size, the Hamming distance is an upper bound on the Levenshtein distance.
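These bounds are easy to verify programmatically; the sketch below (the helper functions and example pairs are my own) checks all four against a straightforward distance computation:

```python
def levenshtein(s, t):
    """Levenshtein distance via the two-row variant of the recurrence."""
    prev = list(range(len(t) + 1))
    for i, sc in enumerate(s, 1):
        curr = [i]
        for j, tc in enumerate(t, 1):
            curr.append(prev[j - 1] if sc == tc
                        else 1 + min(prev[j], curr[j - 1], prev[j - 1]))
        prev = curr
    return prev[-1]

def hamming(s, t):
    """Number of positions where equal-length strings differ."""
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

for s, t in [("kitten", "sitting"), ("flaw", "flaws"), ("abc", "abc")]:
    d = levenshtein(s, t)
    assert d >= abs(len(s) - len(t))   # at least the size difference
    assert d <= max(len(s), len(t))    # at most the longer length
    assert (d == 0) == (s == t)        # zero iff identical
    if len(s) == len(t):
        assert d <= hamming(s, t)      # Hamming distance is an upper bound
```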
Reference
- R.A. Wagner and M.J. Fischer. 1974. The String-to-String Correction Problem. Journal of the ACM, 21(1):168–173.