Matrix functions module

This module aims to provide various methods for the computation of matrix functions.

Classes

class  MatrixExponential< MatrixType >
 Class for computing the matrix exponential.
struct  MatrixExponentialReturnValue< Derived >
 Proxy for the matrix exponential of some matrix (expression).
class  MatrixFunction< MatrixType, AtomicType, IsComplex >
 Class for computing matrix functions.
class  MatrixFunctionAtomic< MatrixType >
 Helper class for computing matrix functions of atomic matrices.
class  MatrixFunctionReturnValue< Derived >
 Proxy for the matrix function of some matrix (expression).
class  MatrixLogarithmAtomic< MatrixType >
 Helper class for computing matrix logarithm of atomic matrices.
class  MatrixLogarithmReturnValue< Derived >
 Proxy for the matrix logarithm of some matrix (expression).
class  MatrixSquareRoot< MatrixType, IsComplex >
 Class for computing matrix square roots of general matrices.
class  MatrixSquareRootQuasiTriangular< MatrixType >
 Class for computing matrix square roots of upper quasi-triangular matrices.
class  MatrixSquareRootReturnValue< Derived >
 Proxy for the matrix square root of some matrix (expression).
class  MatrixSquareRootTriangular< MatrixType >
 Class for computing matrix square roots of upper triangular matrices.
class  StdStemFunctions< Scalar >
 Stem functions corresponding to standard mathematical functions.

Detailed Description

This module aims to provide various methods for the computation of matrix functions.

To use this module, add

#include <unsupported/Eigen/MatrixFunctions>

at the start of your source file.

This module defines the following MatrixBase methods:

- MatrixBase::cos(), for computing the matrix cosine
- MatrixBase::cosh(), for computing the matrix hyperbolic cosine
- MatrixBase::exp(), for computing the matrix exponential
- MatrixBase::log(), for computing the matrix logarithm
- MatrixBase::matrixFunction(), for computing general matrix functions
- MatrixBase::sin(), for computing the matrix sine
- MatrixBase::sinh(), for computing the matrix hyperbolic sine
- MatrixBase::sqrt(), for computing the matrix square root

These methods are the main entry points to this module.

Matrix functions are defined as follows. Suppose that $ f $ is an entire function (that is, a function on the complex plane that is everywhere complex differentiable). Then its Taylor series

\[ f(0) + f'(0) x + \frac{f''(0)}{2} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots \]

converges to $ f(x) $. In this case, we can define the matrix function by the same series:

\[ f(M) = f(0) + f'(0) M + \frac{f''(0)}{2} M^2 + \frac{f'''(0)}{3!} M^3 + \cdots \]
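As a quick sanity check (a minimal sketch, not part of the module's API; the choice of $ f = \exp $, the matrix M, and the truncation at 20 terms are illustrative assumptions), the truncated series can be compared against the module's MatrixBase::exp():

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  // A matrix with small norm, so the truncated series converges quickly.
  MatrixXd M = 0.1 * MatrixXd::Random(3,3);

  // Truncated series for f = exp: sum of M^k / k! for k = 0, ..., 19.
  MatrixXd series = MatrixXd::Zero(3,3);
  MatrixXd term = MatrixXd::Identity(3,3);   // the k = 0 term, M^0 / 0!
  for (int k = 1; k <= 20; ++k) {
    series += term;
    term = term * M / k;                     // update to M^k / k!
  }

  // The difference should be tiny.
  std::cout << "series - exp(M) =\n" << series - M.exp() << "\n";
}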

MatrixBase methods defined in the MatrixFunctions module

The remainder of the page documents the following MatrixBase methods which are defined in the MatrixFunctions module.

MatrixBase::cos()

Compute the matrix cosine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::cos() const
Parameters:
[in] M: a square matrix.
Returns:
expression representing $ \cos(M) $.

This function calls matrixFunction() with StdStemFunctions::cos().

See also:
sin() for an example.

MatrixBase::cosh()

Compute the matrix hyperbolic cosine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::cosh() const
Parameters:
[in] M: a square matrix.
Returns:
expression representing $ \cosh(M) $.

This function calls matrixFunction() with StdStemFunctions::cosh().

See also:
sinh() for an example.

MatrixBase::exp()

Compute the matrix exponential.

const MatrixExponentialReturnValue<Derived> MatrixBase<Derived>::exp() const
Parameters:
[in] M: matrix whose exponential is to be computed.
Returns:
expression representing the matrix exponential of M.

The matrix exponential of $ M $ is defined by

\[ \exp(M) = \sum_{k=0}^\infty \frac{M^k}{k!}. \]

The matrix exponential can be used to solve linear ordinary differential equations: the solution of $ y' = My $ with the initial condition $ y(0) = y_0 $ is given by $ y(t) = \exp(Mt) y_0 $.
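For example (a minimal sketch; the matrix M, initial condition, and evaluation time below are arbitrary illustrative values), the solution can be evaluated directly with this module:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  // y' = My with M the generator of a rotation (a harmonic oscillator).
  MatrixXd M(2,2);
  M << 0, 1,
      -1, 0;
  VectorXd y0(2);
  y0 << 1, 0;                  // initial condition y(0)
  const double t = 1.0;        // evaluation time
  MatrixXd Mt = t * M;
  MatrixXd expMt = Mt.exp();   // exp(Mt)
  VectorXd y = expMt * y0;     // y(t) = exp(Mt) y0
  std::cout << "y(" << t << ") = " << y.transpose() << "\n";
}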

The cost of the computation is approximately $ 20 n^3 $ for matrices of size $ n $. The number 20 depends weakly on the norm of the matrix.

The matrix exponential is computed using the scaling-and-squaring method combined with Padé approximation. The matrix is first rescaled, then the exponential of the rescaled matrix is approximated by a Padé approximant, and then the rescaling is undone by repeated squaring. The degree of the Padé approximant is chosen such that the approximation error is less than the round-off error. However, errors may accumulate during the squaring phase.
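Concretely, the scaling step relies on the identity

\[ \exp(M) = \bigl( \exp(2^{-s} M) \bigr)^{2^s}, \]

where the integer $ s $ is chosen just large enough that $ 2^{-s} M $ has small norm; the Padé approximant is evaluated at the rescaled matrix, and the result is then squared $ s $ times.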

Details of the algorithm can be found in: Nicholas J. Higham, "The scaling and squaring method for the matrix exponential revisited," SIAM J. Matrix Anal. Applic., 26:1179–1193, 2005.

Example: The following program checks that

\[ \exp \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] = \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right]. \]

This corresponds to a rotation of $ \frac14\pi $ radians around the z-axis.

#include <unsupported/Eigen/MatrixFunctions>
#include <cmath>
#include <iostream>
using namespace Eigen;

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(3,3);
  A << 0,    -pi/4, 0,
       pi/4,  0,    0,
       0,     0,    0;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix exponential of A is:\n" << A.exp() << "\n\n";
}

Output:

The matrix A is:
        0 -0.785398         0
 0.785398         0         0
        0         0         0

The matrix exponential of A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

Note:
M has to be a matrix of float, double, long double, complex<float>, complex<double>, or complex<long double>.

MatrixBase::log()

Compute the matrix logarithm.

const MatrixLogarithmReturnValue<Derived> MatrixBase<Derived>::log() const
Parameters:
[in] M: invertible matrix whose logarithm is to be computed.
Returns:
expression representing the matrix logarithm of M.

The matrix logarithm of $ M $ is a matrix $ X $ such that $ \exp(X) = M $ where exp denotes the matrix exponential. As for the scalar logarithm, the equation $ \exp(X) = M $ may have multiple solutions; this function returns a matrix whose eigenvalues have imaginary part in the interval $ (-\pi,\pi] $.

In the real case, the matrix $ M $ should be invertible and it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed). In the complex case, it only needs to be invertible.

This function computes the matrix logarithm using the Schur-Parlett algorithm as implemented by MatrixBase::matrixFunction(). The logarithm of an atomic block is computed by MatrixLogarithmAtomic, which uses direct computation for 1-by-1 and 2-by-2 blocks and an inverse scaling-and-squaring algorithm for bigger blocks, with the square roots computed by MatrixBase::sqrt().
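The inverse scaling-and-squaring step is based on the identity

\[ \log(M) = 2^s \log \bigl( M^{1/2^s} \bigr), \]

where $ s $ successive square roots are taken until $ M^{1/2^s} $ is close enough to the identity for a Padé approximant of the logarithm to be accurate.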

Details of the algorithm can be found in Section 11.6.2 of: Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. ISBN 978-0-898716-46-7.

Example: The following program checks that

\[ \log \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right] = \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]. \]

This corresponds to a rotation of $ \frac14\pi $ radians around the z-axis. This is the inverse of the example used in the documentation of exp().

#include <unsupported/Eigen/MatrixFunctions>
#include <cmath>
#include <iostream>
using namespace Eigen;

int main()
{
  using std::sqrt;
  MatrixXd A(3,3);
  A << 0.5*sqrt(2), -0.5*sqrt(2), 0,
       0.5*sqrt(2),  0.5*sqrt(2), 0,
       0,            0,           1;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix logarithm of A is:\n" << A.log() << "\n";
}

Output:

The matrix A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

The matrix logarithm of A is:
-1.11022e-16    -0.785398            0
    0.785398 -1.11022e-16            0
           0            0            0
Note:
M has to be a matrix of float, double, long double, complex<float>, complex<double>, or complex<long double>.
See also:
MatrixBase::exp(), MatrixBase::matrixFunction(), class MatrixLogarithmAtomic, MatrixBase::sqrt().

MatrixBase::matrixFunction()

Compute a matrix function.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::matrixFunction(typename internal::stem_function<typename internal::traits<Derived>::Scalar>::type f) const
Parameters:
[in] M: argument of matrix function; should be a square matrix.
[in] f: an entire function; f(x,n) should compute the n-th derivative of f at x.
Returns:
expression representing f applied to M.

Suppose that M is a matrix whose entries have type Scalar. Then, the second argument, f, should be a function with prototype

ComplexScalar f(ComplexScalar, int)

where ComplexScalar = std::complex<Scalar> if Scalar is real (e.g., float or double) and ComplexScalar = Scalar if Scalar is complex. The return value of f(x,n) should be $ f^{(n)}(x) $, the n-th derivative of f at x.
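With a C++11 compiler (an assumption; the snippet below is a sketch, not part of the module's documentation), a capture-less lambda converts to the required function pointer, so the stem function can be supplied inline:

#include <unsupported/Eigen/MatrixFunctions>
#include <complex>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd A = MatrixXd::Random(3,3);
  // A capture-less lambda converts to the function-pointer type expected by
  // matrixFunction(); every derivative of exp is exp, so n is ignored.
  std::cout << A.matrixFunction(
                   [](std::complex<double> x, int) { return std::exp(x); })
            << "\n";
}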

This routine uses the algorithm described in: Philip Davies and Nicholas J. Higham, "A Schur-Parlett algorithm for computing matrix functions", SIAM J. Matrix Anal. Applic., 25:464–485, 2003.

The actual work is done by the MatrixFunction class.

Example: The following program checks that

\[ \exp \left[ \begin{array}{ccc} 0 & \frac14\pi & 0 \\ -\frac14\pi & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] = \left[ \begin{array}{ccc} \frac12\sqrt2 & -\frac12\sqrt2 & 0 \\ \frac12\sqrt2 & \frac12\sqrt2 & 0 \\ 0 & 0 & 1 \end{array} \right]. \]

This corresponds to a rotation of $ \frac14\pi $ radians around the z-axis. This is the same example as used in the documentation of exp().

#include <unsupported/Eigen/MatrixFunctions>
#include <cmath>
#include <complex>
#include <iostream>
using namespace Eigen;

std::complex<double> expfn(std::complex<double> x, int)
{
  return std::exp(x);
}

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(3,3);
  A << 0,    -pi/4, 0,
       pi/4,  0,    0,
       0,     0,    0;
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix exponential of A is:\n"
            << A.matrixFunction(expfn) << "\n\n";
}

Output:

The matrix A is:
        0 -0.785398         0
 0.785398         0         0
        0         0         0

The matrix exponential of A is:
 0.707107 -0.707107         0
 0.707107  0.707107         0
        0         0         1

Note that the function expfn is defined for complex numbers x, even though the matrix A is over the reals. Instead of expfn, we could also have used StdStemFunctions::exp:

A.matrixFunction(StdStemFunctions<std::complex<double> >::exp)

MatrixBase::sin()

Compute the matrix sine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::sin() const
Parameters:
[in] M: a square matrix.
Returns:
expression representing $ \sin(M) $.

This function calls matrixFunction() with StdStemFunctions::sin().

Example:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd A = MatrixXd::Random(3,3);
  std::cout << "A = \n" << A << "\n\n";
  MatrixXd sinA = A.sin();
  std::cout << "sin(A) = \n" << sinA << "\n\n";
  MatrixXd cosA = A.cos();
  std::cout << "cos(A) = \n" << cosA << "\n\n";
  // The matrix functions satisfy sin^2(A) + cos^2(A) = I,
  // like the scalar functions.
  std::cout << "sin^2(A) + cos^2(A) = \n" << sinA*sinA + cosA*cosA << "\n\n";
}

Output:

A = 
 0.680375   0.59688 -0.329554
-0.211234  0.823295  0.536459
 0.566198 -0.604897 -0.444451

sin(A) = 
 0.679919    0.4579 -0.400612
-0.227278  0.821913    0.5358
 0.570141 -0.676728 -0.462398

cos(A) = 
 0.927728 -0.530361 -0.110482
0.00969246  0.889022 -0.137604
-0.132574  -0.04289   1.16475

sin^2(A) + cos^2(A) = 
           1  5.55112e-16  6.38378e-16
 7.63278e-16            1  3.33067e-16
-1.66533e-16 -5.89806e-16            1

MatrixBase::sinh()

Compute the matrix hyperbolic sine.

const MatrixFunctionReturnValue<Derived> MatrixBase<Derived>::sinh() const
Parameters:
[in] M: a square matrix.
Returns:
expression representing $ \sinh(M) $.

This function calls matrixFunction() with StdStemFunctions::sinh().

Example:

#include <unsupported/Eigen/MatrixFunctions>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf A = MatrixXf::Random(3,3);
  std::cout << "A = \n" << A << "\n\n";
  MatrixXf sinhA = A.sinh();
  std::cout << "sinh(A) = \n" << sinhA << "\n\n";
  MatrixXf coshA = A.cosh();
  std::cout << "cosh(A) = \n" << coshA << "\n\n";
  // The matrix functions satisfy cosh^2(A) - sinh^2(A) = I,
  // like the scalar functions.
  std::cout << "cosh^2(A) - sinh^2(A) = \n" << coshA*coshA - sinhA*sinhA << "\n\n";
}

Output:

A = 
 0.680375   0.59688 -0.329554
-0.211234  0.823295  0.536459
 0.566198 -0.604897 -0.444451

sinh(A) = 
 0.682534  0.739989 -0.256871
-0.194928  0.826512  0.537546
 0.562584  -0.53163 -0.425199

cosh(A) = 
  1.07817  0.567068  0.132125
-0.0041862   1.11649  0.135361
 0.128891 0.0659989  0.851201

cosh^2(A) - sinh^2(A) = 
           1 -5.96046e-07 -2.98023e-08
-6.79865e-08            1 -2.98023e-08
 1.49012e-08 -3.42727e-07            1

MatrixBase::sqrt()

Compute the matrix square root.

const MatrixSquareRootReturnValue<Derived> MatrixBase<Derived>::sqrt() const
Parameters:
[in] M: invertible matrix whose square root is to be computed.
Returns:
expression representing the matrix square root of M.

The matrix square root of $ M $ is the matrix $ M^{1/2} $ whose square is the original matrix; so if $ S = M^{1/2} $ then $ S^2 = M $.

In the real case, the matrix $ M $ should be invertible and it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed). In that case, the matrix has a square root which is also real, and this is the square root computed by this function.

The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix can then be computed directly. The cost is approximately $ 25 n^3 $ real flops for the real Schur decomposition and $ 3\frac13 n^3 $ real flops for the remainder (though the computation time in practice is likely more than this indicates).

Details of the algorithm can be found in: Nicholas J. Higham, "Computing real square roots of a real matrix", Linear Algebra Appl., 88/89:405–430, 1987.

If the matrix is positive-definite symmetric, then the square root is also positive-definite symmetric. In this case, it is best to use SelfAdjointEigenSolver::operatorSqrt() to compute it.
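For instance (a minimal sketch; the matrix A below is made symmetric positive-definite by construction purely for illustration):

#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  // Build a symmetric positive-definite matrix.
  MatrixXd B = MatrixXd::Random(3,3);
  MatrixXd A = B * B.transpose() + 3.0 * MatrixXd::Identity(3,3);

  // operatorSqrt() returns the positive-definite square root of A.
  SelfAdjointEigenSolver<MatrixXd> es(A);
  MatrixXd S = es.operatorSqrt();
  std::cout << "||S*S - A|| = " << (S*S - A).norm() << "\n";
}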

In the complex case, the matrix $ M $ should be invertible; this is a restriction of the algorithm. The square root computed by this algorithm is the one whose eigenvalues have an argument in the interval $ (-\frac12\pi, \frac12\pi] $. This is the usual branch cut.

The computation is the same as in the real case, except that the complex Schur decomposition is used to reduce the matrix to a triangular matrix. The theoretical cost is the same. Details are in: Åke Björck and Sven Hammarling, "A Schur method for the square root of a matrix", Linear Algebra Appl., 52/53:127–140, 1983.

Example: The following program checks that the square root of

\[ \left[ \begin{array}{cc} \cos(\frac13\pi) & -\sin(\frac13\pi) \\ \sin(\frac13\pi) & \cos(\frac13\pi) \end{array} \right], \]

corresponding to a rotation over 60 degrees, is a rotation over 30 degrees:

\[ \left[ \begin{array}{cc} \cos(\frac16\pi) & -\sin(\frac16\pi) \\ \sin(\frac16\pi) & \cos(\frac16\pi) \end{array} \right]. \]

#include <unsupported/Eigen/MatrixFunctions>
#include <cmath>
#include <iostream>
using namespace Eigen;

int main()
{
  const double pi = std::acos(-1.0);
  MatrixXd A(2,2);
  A << std::cos(pi/3), -std::sin(pi/3),
       std::sin(pi/3),  std::cos(pi/3);
  std::cout << "The matrix A is:\n" << A << "\n\n";
  std::cout << "The matrix square root of A is:\n" << A.sqrt() << "\n\n";
  std::cout << "The square of the last matrix is:\n" << A.sqrt() * A.sqrt() << "\n";
}

Output:

The matrix A is:
      0.5 -0.866025
 0.866025       0.5

The matrix square root of A is:
0.866025     -0.5
     0.5 0.866025

The square of the last matrix is:
      0.5 -0.866025
 0.866025       0.5
See also:
class RealSchur, class ComplexSchur, class MatrixSquareRoot, SelfAdjointEigenSolver::operatorSqrt().