Invariant subspace
In mathematics, an invariant subspace of a linear mapping T : V → V from some vector space V to itself is a subspace W of V such that T(W) is contained in W. An invariant subspace of T is also said to be T-invariant.

If W is T-invariant, we can restrict T to W to arrive at a new linear mapping
T|W : W → W.
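For concreteness, here is a minimal NumPy sketch (the matrix and the subspace are illustrative choices, not taken from the text): it checks that T maps a chosen subspace W into itself and reads off the matrix of the restriction T|W in a basis of W.

    import numpy as np

    # T maps span{e1, e2} into itself (illustrative choice of matrix).
    T = np.array([[2.0, 1.0, 5.0],
                  [0.0, 3.0, 4.0],
                  [0.0, 0.0, 1.0]])

    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])          # columns form a basis of W

    TW = T @ W                           # images of the basis vectors of W

    # W is T-invariant iff every column of T @ W lies in the column space of W.
    coeffs, *_ = np.linalg.lstsq(W, TW, rcond=None)
    assert np.allclose(W @ coeffs, TW)   # holds here, so T(W) is contained in W

    # In the chosen basis of W, the restriction T|W has matrix `coeffs`.
    print(coeffs)                        # [[2. 1.], [0. 3.]]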


Next we give a few immediate examples of invariant subspaces.

Certainly V itself, and the subspace {0}, are trivially invariant subspaces for every linear operator T : V → V. For certain linear operators there is no non-trivial invariant subspace; consider for instance a rotation of a two-dimensional real vector space.
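This can be checked numerically; in the sketch below (an illustrative angle and matrix), the rotation has no real eigenvalues, so no line through the origin in R² is mapped into itself.

    import numpy as np

    theta = 0.7                          # any angle that is not a multiple of pi
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # The eigenvalues are cos(theta) ± i sin(theta): complex unless sin(theta) = 0,
    # so R has no real eigenvector and no 1-dimensional subspace of R^2 is R-invariant.
    print(np.linalg.eigvals(R))          # [0.7648+0.6442j  0.7648-0.6442j]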

Let v be an eigenvector of T, i.e. T v = λv. Then W = span{v} is T-invariant. As a consequence of the fundamental theorem of algebra, every linear operator on a complex finite-dimensional vector space with dimension at least 2 has an eigenvector. Therefore every such linear operator has a non-trivial invariant subspace. The fact that the complex numbers are algebraically closed is required here. Comparing with the previous example, one can see that the invariant subspaces of a linear transformation are dependent upon the underlying scalar field of V.
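A short NumPy sketch of this point, using an arbitrary example matrix with real eigenvalues: an eigenvector of T spans a one-dimensional T-invariant subspace.

    import numpy as np

    T = np.array([[4.0, 1.0],
                  [2.0, 3.0]])           # arbitrary example with real eigenvalues 5 and 2

    eigenvalues, eigenvectors = np.linalg.eig(T)
    v = eigenvectors[:, 0]               # eigenvector belonging to eigenvalues[0]

    # T v is a scalar multiple of v, so W = span{v} satisfies T(W) contained in W.
    assert np.allclose(T @ v, eigenvalues[0] * v)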

An invariant vector (fixed point of T), other than 0, spans an invariant subspace of dimension 1. An invariant subspace of dimension 1 is acted on by T through multiplication by a scalar, and consists of invariant vectors if and only if that scalar is 1.
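The following sketch contrasts the two cases with illustrative matrices: on a one-dimensional invariant subspace, T acts by a scalar, and the subspace consists of fixed points exactly when that scalar is 1.

    import numpy as np

    v = np.array([1.0, 1.0])

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])           # A v = v: the line span{v} consists of fixed points of A
    B = np.array([[0.0, 2.0],
                  [2.0, 0.0]])           # B v = 2 v: span{v} is B-invariant, but only 0 is fixed

    assert np.allclose(A @ v, v)         # scalar is 1
    assert np.allclose(B @ v, 2.0 * v)   # scalar is 2, so no nonzero fixed vector in span{v}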

As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T.

More generally, invariant subspaces are defined for sets of operators as subspaces invariant for each operator in the set. Let L(V) denote the algebra of linear transformations on V, and Lat(T) be the family of subspaces invariant under T ∈ L(V). (The "Lat" notation refers to the fact that Lat(T) forms a lattice; see discussion below.) Given a nonempty set Σ ⊂ L(V), one considers the subspaces invariant under each T ∈ Σ. In symbols,

    Lat(Σ) = ⋂_{T ∈ Σ} Lat(T).
For instance, it is clear that if Σ = L(V), then Lat(Σ) = { {0}, V}.
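As an illustration of the definition (with matrices made up for the example), the sketch below tests whether a candidate subspace lies in Lat(Σ) for a two-element Σ by checking invariance under each operator, i.e. membership in the intersection Lat(T1) ∩ Lat(T2).

    import numpy as np

    def is_invariant(T, W):
        """True if the column space of W is mapped into itself by T."""
        coeffs, *_ = np.linalg.lstsq(W, T @ W, rcond=None)
        return np.allclose(W @ coeffs, T @ W)

    # Two operators sharing the invariant subspace span{e1} (illustrative choices).
    T1 = np.array([[1.0, 2.0], [0.0, 3.0]])
    T2 = np.array([[4.0, 1.0], [0.0, 5.0]])
    W  = np.array([[1.0], [0.0]])

    # W lies in Lat({T1, T2}) exactly when both invariance tests pass.
    print(all(is_invariant(T, W) for T in (T1, T2)))   # True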

Given a representation
Group representation
In the mathematical field of representation theory, group representations describe abstract groups in terms of linear transformations of vector spaces; in particular, they can be used to represent group elements as matrices so that the group operation can be represented by matrix multiplication...

 of a group G on a vector space V, we have a linear transformation T(g) : VV for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation
Group representation
In the mathematical field of representation theory, group representations describe abstract groups in terms of linear transformations of vector spaces; in particular, they can be used to represent group elements as matrices so that the group operation can be represented by matrix multiplication...

 and the group G acts on W in a natural way.

As another example, let T ∈ L(V) and Σ be the algebra generated by {1, T}, where 1 is the identity operator. Then Lat(T) = Lat(Σ). Because T lies in Σ, trivially Lat(Σ) ⊂ Lat(T). On the other hand, Σ consists of polynomials in 1 and T, and a subspace invariant under T is invariant under every such polynomial; therefore the reverse inclusion holds as well.
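A quick numerical check of this reasoning, with an arbitrary example matrix and polynomial: a subspace invariant under T is automatically invariant under any polynomial in 1 and T.

    import numpy as np

    T = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    I = np.eye(2)
    p_of_T = 4.0 * I + 2.0 * T + T @ T   # an element of the algebra generated by {1, T}

    e1 = np.array([1.0, 0.0])            # span{e1} is T-invariant: T e1 = 2 e1

    # Any polynomial in 1 and T also maps e1 to a multiple of e1.
    print(T @ e1, p_of_T @ e1)           # [2. 0.] [12.  0.]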

Matrix representation

Over a finite-dimensional vector space, every linear transformation T : V → V can be represented by a matrix once a basis of V has been chosen.

Suppose now W is a T-invariant subspace. Pick a basis C = {v1, ..., vk} of W and complete it to a basis B of V. Then, with respect to this basis, the matrix representation of T takes the form

    T = [ T11  T12 ]
        [  0   T22 ]

where the upper-left block T11 is the restriction of T to W.

In other words, given an invariant subspace W of T, V can be decomposed into the direct sum

    V = W ⊕ W′.

Viewing T as an operator matrix

    T = [ T11  T12 ]
        [ T21  T22 ]

with respect to this decomposition, it is clear that T21 : W → W′ must be zero.
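The block structure can be seen concretely in the following sketch (the 3×3 matrix and the basis are illustrative choices): representing T in a basis that extends a basis of W makes the lower-left block T21 vanish.

    import numpy as np

    B = np.array([[1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])      # columns: a basis of W (first two) completed to a basis of V
    A = np.array([[2.0, 1.0, 7.0],
                  [0.0, 3.0, 4.0],
                  [0.0, 0.0, 5.0]])      # a block upper-triangular "model" (illustrative choice)
    T = B @ A @ np.linalg.inv(B)         # the corresponding operator written in the standard basis

    # W = span of the first two columns of B is T-invariant; changing to the basis B
    # recovers the block form, and the lower-left block T21 (here 1x2) is zero.
    M = np.linalg.inv(B) @ T @ B
    print(np.round(M, 10))               # equals A
    print(np.allclose(M[2, :2], 0.0))    # True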

Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically. The projection operator P onto W is defined by P(w + w′) = w, where w ∈ W and w′ ∈ W′. The projection P has the matrix representation

    P = [ 1  0 ]
        [ 0  0 ].

A straightforward calculation shows that W = Ran P, the range of P, is invariant under T if and only if PTP = TP. In other words, a subspace W being an element of Lat(T) is equivalent to the corresponding projection satisfying the relation PTP = TP.

If P is a projection (i.e. P² = P), so is 1 − P, where 1 is the identity operator. It follows from the above that TP = PT if and only if both Ran P and Ran(1 − P) are invariant under T. In that case, T has the matrix representation

    T = [ T11   0  ]
        [  0   T22 ]

with respect to the decomposition Ran P ⊕ Ran(1 − P). Colloquially, a projection that commutes with T "diagonalizes" T.
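Both criteria can be verified numerically, as in the sketch below (the operator and projection are illustrative choices): PTP = TP detects invariance of Ran P, while PT = TP requires Ran(1 − P) to be invariant as well.

    import numpy as np

    T = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # P projects onto W = span{e1} along W' = span{e2}.
    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]])

    print(np.allclose(P @ T @ P, T @ P))     # True:  W = Ran P is T-invariant
    print(np.allclose(P @ T, T @ P))         # False: Ran(1 - P) = span{e2} is not T-invariant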

Invariant subspace problem



The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. This problem is unsolved.

In the more general case where V is hypothesized to be a Banach space, there is an example of an operator without an invariant subspace due to Per Enflo (1976). A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read.

Invariant-subspace lattice

Given a nonempty Σ ⊂ L(V), the subspaces invariant under each element of Σ form a lattice, sometimes called the invariant-subspace lattice of Σ and denoted by Lat(Σ).

The lattice operations are defined in a natural way: for a family Σ′ ⊂ Lat(Σ), the meet operation is defined by

    ⋀_{W ∈ Σ′} W = ⋂_{W ∈ Σ′} W,

while the join operation is

    ⋁_{W ∈ Σ′} W = span( ⋃_{W ∈ Σ′} W ).

A minimal element in Lat(Σ) is said to be a minimal invariant subspace.
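Computationally, the meet and join of finitely many subspaces can be sketched as follows (NumPy, with example subspaces chosen for illustration): the join is the span of the union of spanning sets, and the meet is obtained from a null-space computation.

    import numpy as np

    def join(U, V):
        """Orthonormal basis for the span of the union of the column spaces of U and V."""
        Q, s, _ = np.linalg.svd(np.hstack([U, V]), full_matrices=False)
        return Q[:, s > 1e-10]

    def meet(U, V):
        """Basis of the intersection of the column spaces of U and V."""
        # U x = V y  <=>  [U, -V] (x, y)^T = 0; the x-part parametrizes the intersection.
        _, s, Vt = np.linalg.svd(np.hstack([U, -V]))
        null_mask = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))]) < 1e-10
        null_basis = Vt[null_mask].T
        return U @ null_basis[:U.shape[1], :]

    # Two planes in R^3; their meet is a line, their join is all of R^3.
    U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2}
    V = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # span{e1, e3}

    print(meet(U, V))                     # a basis of span{e1} (up to scaling)
    print(join(U, V).shape[1])            # 3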

Fundamental theorem of noncommutative algebra

Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite dimensional complex vector space has a nontrivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains nontrivial elements for certain Σ.

Theorem (Burnside) Assume V is a complex vector space of finite dimension. For every proper subalgebra Σ of L(V), Lat(Σ) contains a nontrivial element.

Burnside's theorem is of fundamental importance in linear algebra. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized.

A nonempty Σ ⊂ L(V) is said to be triangularizable if there exists a basis {e1, ..., en} of V such that

    span{e1, ..., ej} ∈ Lat(Σ) for every j = 1, ..., n.
In other words, Σ is triangularizable if there exists a basis such that every element of Σ has an upper-triangular matrix representation in that basis. It follows from Burnside's theorem that every commutative algebra Σ in L(V) is triangularizable. Hence every commuting family in L(V) can be simultaneously upper-triangularized.
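The defining condition can be checked directly, as in the sketch below (two commuting matrices built from a common triangular model, chosen for illustration): in a suitable basis, every element of Σ is upper triangular, equivalently each span{e1, ..., ej} lies in Lat(Σ).

    import numpy as np

    S  = np.array([[1.0, 1.0],
                   [1.0, 2.0]])          # columns are the triangularizing basis {e1', e2'}
    A1 = np.array([[1.0, 1.0],
                   [0.0, 2.0]])          # upper-triangular "model" (illustrative choice)
    T1 = S @ A1 @ np.linalg.inv(S)
    T2 = T1 @ T1                         # a polynomial in T1, so it commutes with T1

    # Σ = {T1, T2} is triangularizable: in the basis given by the columns of S,
    # every element of Σ is upper triangular, i.e. span{e1'}, span{e1', e2'} ∈ Lat(Σ).
    for T in (T1, T2):
        M = np.linalg.inv(S) @ T @ S
        print(np.allclose(np.tril(M, -1), 0.0))   # True, True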

Left ideals

If A is an algebra, one can define a left regular representation Φ on A by Φ(a)b = ab; then Φ is a homomorphism from A to L(A), the algebra of linear transformations on A.

The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M.

If M is a left ideal of A, consider the quotient vector space A/M. The left regular representation Φ now descends to a representation Φ′ on A/M: if [b] denotes the equivalence class of b in A/M, then Φ′(a)[b] = [ab]. The kernel of the representation Φ′ is the set {a ∈ A | ab ∈ M for all b}.

The representation Φ′ is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under {Φ′(a) | a ∈ A} if and only if its preimage under the quotient map, V + M, is a left ideal in A.
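To make this concrete, here is a small sketch using the algebra of 2×2 upper-triangular real matrices (an illustrative choice): the left regular representation acts by left multiplication, and the matrices with zero (1,1) entry, M = span{E12, E22}, form a left ideal and hence a Φ-invariant subspace.

    import numpy as np

    # A: the algebra of 2x2 upper-triangular real matrices, with basis (E11, E12, E22).
    basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # E11
             np.array([[0.0, 1.0], [0.0, 0.0]]),   # E12
             np.array([[0.0, 0.0], [0.0, 1.0]])]   # E22

    def coords(m):
        """Coordinates of an upper-triangular matrix m in the basis (E11, E12, E22)."""
        return np.array([m[0, 0], m[0, 1], m[1, 1]])

    def Phi(a):
        """Matrix of the left regular representation Phi(a): b -> a b on A."""
        return np.column_stack([coords(a @ e) for e in basis])

    # M = span{E12, E22} is a (maximal) left ideal of A; its coordinate subspace:
    W = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])

    a = np.array([[2.0, 3.0],
                  [0.0, 5.0]])                      # an arbitrary element of A

    # M is invariant under Phi(a): the E11 component of Phi(a) applied to M-vectors is zero.
    print(np.allclose((Phi(a) @ W)[0, :], 0.0))     # True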