TRIQS/nda 1.3.0
Multi-dimensional array library for C++
In this example, we show how to do linear algebra with nda arrays.
All the following code snippets are part of the same main
function:
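Since the original snippets are not reproduced here, the examples below are minimal sketches. A scaffold for the main function might look as follows (the exact set of headers to include is an assumption and may vary between nda versions):

```cpp
// minimal scaffold sketch -- the header names below are assumptions
#include <nda/nda.hpp>
#include <nda/linalg.hpp> // linear algebra tools (assumed aggregate header)
#include <nda/blas.hpp>   // BLAS interface
#include <nda/lapack.hpp> // LAPACK interface

#include <iostream>

int main() {
  // ... all of the following snippets go here ...
}
```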
Before showing some linear algebra related operations, let us first introduce the nda::matrix and nda::vector types and highlight their similarities and differences with respect to nda::array.
As already mentioned in the quick introduction Doing linear algebra with arrays, nda provides an nda::matrix and an nda::vector type. Both are specializations of the general nda::basic_array with the following features:
- nda::matrix is a 2-dimensional array belonging to the 'M' algebra.
- nda::vector is a 1-dimensional array belonging to the 'V' algebra.

Their algebras determine how matrices and vectors behave in certain operations and expressions.
Otherwise, everything that is true for nda::basic_array objects is also true for nda::matrix and nda::vector objects.
To construct a matrix or a vector, we can use the same methods as for nda::array objects. We refer the user to Example 2: Constructing arrays for more information.
For example, to construct a matrix and a vector of a certain shape, we can do
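For instance, in a minimal sketch (the names M and v are chosen here for illustration):

```cpp
// construct an uninitialized 4 x 3 matrix and a vector of size 3
auto M = nda::matrix<double>(4, 3);
auto v = nda::vector<double>(3);
```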
Here, a \( 4 \times 3 \) matrix and a vector of size \( 3 \) are created.
Let us note that there are some Factories and transformations that are specific to matrices:
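For example, a sketch using the two factories discussed below:

```cpp
// 3 x 3 identity matrix and a diagonal matrix built from a 1-dimensional array
auto id = nda::eye<double>(3);
auto D  = nda::diag(nda::array<double, 1>{1.0, 2.0, 3.0});
std::cout << id << std::endl;
std::cout << D << std::endl;
```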
nda::eye constructs an identity matrix of a certain size and nda::diag takes a 1-dimensional array and constructs a square diagonal matrix containing the values of the given array.
Again, initializing and assigning to matrices and vectors works (almost) exactly the same way as it does for arrays (see Example 3: Initializing arrays).
The main difference occurs when assigning a scalar. While for arrays and vectors the assignment is done element-wise, for matrices the scalar is assigned only to the elements on the shorter diagonal and the rest is zeroed out:
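A sketch of this behavior, reusing the 4 x 3 matrix M from above (shapes and values are chosen for illustration):

```cpp
nda::array<double, 2> A(3, 3);
A = 42.0; // arrays: every element becomes 42

M = 42.0; // matrices: only the shorter diagonal becomes 42, the rest is zeroed out
std::cout << A << std::endl;
std::cout << M << std::endl;
```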
There are some Factories and transformations for views that are specific to matrices and vectors; otherwise everything mentioned in Example 4: Views and slices still applies:
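For example, in a sketch (we assume nda::transpose and nda::diagonal are among the factories meant here):

```cpp
auto Mt = nda::transpose(M);  // transposed view of the 4 x 3 matrix M
auto d  = nda::diagonal(id);  // view of the diagonal of the identity matrix
std::cout << Mt << std::endl;
std::cout << d << std::endl;
```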
For the remaining features, we refer the user to the corresponding earlier examples: there are no notable differences between arrays, matrices and vectors in those regards.
Here the algebra of the involved types becomes important. In section Performing arithmetic operations, we have briefly introduced how arithmetic operations are implemented in nda in terms of lazy expressions and how the algebra of the operands influences the results.
Let us be more complete in this section. Suppose we have the following objects:

- O1, O2, ...: generic nda objects (arrays, matrices or vectors)
- A1, A2, ...: nda::array objects
- M1, M2, ...: nda::matrix objects
- v1, v2, ...: nda::vector objects
- s1, s2, ...: scalars

Then the following operations are allowed (all operations are lazy unless mentioned otherwise):
- O1 +/- O2: element-wise addition/subtraction, shapes of O1 and O2 have to be the same, result has the same shape
- s1 +/- A1 / A1 +/- s1: element-wise addition/subtraction, result has the same shape as A1
- s1 +/- v1 / v1 +/- s1: element-wise addition/subtraction, result has the same shape as v1
- s1 +/- M1 / M1 +/- s1: scalar is added/subtracted to/from the elements on the shorter diagonal of M1, result has the same shape as M1
- A1 * A2: element-wise multiplication, shapes of A1 and A2 have to be the same, result has the same shape
- M1 * M2: non-lazy matrix-matrix multiplication, shapes have the expected requirements for matrix multiplication, calls nda::matmul
- M1 * v1: non-lazy matrix-vector multiplication, shapes have the expected requirements for matrix-vector multiplication, calls nda::matvecmul
- s1 * O1 / O1 * s1: element-wise multiplication, result has the same shape as O1
- A1 / A2: element-wise division, shapes of A1 and A2 have to be the same, result has the same shape
- M1 / M2: non-lazy matrix-matrix multiplication of M1 by the inverse of M2, shapes have the expected requirements for matrix multiplication with the additional requirement that M2 is square (since M2 is inverted), result has the same shape as M1
- O1 / s1: element-wise division, result has the same shape as O1
- s1 / A1: element-wise division, result has the same shape as A1 (note that this is != A1 / s1)
- s1 / v1: element-wise division, result has the same shape as v1 (note that this is != v1 / s1)
- s1 / M1: multiplies (lazy) the scalar with the inverse (non-lazy) of M1, only square matrices are allowed (since M1 is inverted), result is also square with the same size

In addition to the basic matrix-matrix and matrix-vector multiplications described above, nda provides some useful Linear algebra tools. While some of them have a custom implementation, e.g. nda::linalg::cross_product, most make use of the BLAS interface and LAPACK interface to call more optimized routines.
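Before turning to those tools, here is a short sketch contrasting the array and matrix algebras in a multiplication (names and values are chosen for illustration):

```cpp
auto arr1 = nda::array<double, 2>{{1.0, 2.0}, {3.0, 4.0}};
auto arr2 = nda::array<double, 2>{{5.0, 6.0}, {7.0, 8.0}};
auto mat1 = nda::matrix<double>{{1.0, 2.0}, {3.0, 4.0}};
auto mat2 = nda::matrix<double>{{5.0, 6.0}, {7.0, 8.0}};

nda::array<double, 2> ep = arr1 * arr2; // element-wise product (lazy, evaluated on assignment)
nda::matrix<double>   mp = mat1 * mat2; // matrix-matrix product via nda::matmul (non-lazy)
```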
Let us demonstrate some of these tools:
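A sketch (the vector values are chosen for illustration):

```cpp
auto v1 = nda::vector<double>{1.0, 2.0, 3.0};
auto v2 = nda::vector<double>{4.0, 5.0, 6.0};
auto v3 = nda::linalg::cross_product(v1, v2);
std::cout << v3 << std::endl;
```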
Here, we first create two 3-dimensional vectors and then take their cross product.
To check that the cross product is perpendicular to v1 and v2, we can use the dot product:
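For instance (a sketch; we assume the BLAS interface's nda::blas::dot here, the high-level tools may offer an equivalent):

```cpp
// both dot products should vanish (up to floating point rounding)
std::cout << nda::blas::dot(v3, v1) << std::endl;
std::cout << nda::blas::dot(v3, v2) << std::endl;
```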
The cross product can also be expressed as the matrix-vector product of a skew-symmetric matrix and one of the vectors:
\[ \mathbf{a} \times \mathbf{b} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \]
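A sketch of this identity with the vectors from above:

```cpp
// skew-symmetric matrix built from the components of v1
auto S = nda::matrix<double>{{   0.0, -v1(2),  v1(1)},
                             { v1(2),    0.0, -v1(0)},
                             {-v1(1),  v1(0),    0.0}};
std::cout << S * v2 << std::endl; // matrix-vector product, should reproduce v3
```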
Comparing this result to v3 above, we see that this is indeed correct.
Let us now turn to an eigenvalue problem. nda offers the convenience functions nda::linalg::eigenelements and nda::linalg::eigenvalues to obtain the eigenvalues and eigenvectors of a symmetric or Hermitian matrix.
We start from the following symmetric matrix:
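For instance (the matrix elements are chosen for illustration; we call it M1 as in the factorization below):

```cpp
auto M1 = nda::matrix<double>{{2.0, 1.0, 0.0},
                              {1.0, 2.0, 1.0},
                              {0.0, 1.0, 2.0}};
std::cout << M1 << std::endl;
```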
Getting the eigenvalues and eigenvectors is quite easy:
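A sketch, assuming nda::linalg::eigenelements returns the eigenvalues and the matrix of eigenvectors as a pair:

```cpp
auto [eigvals, eigvecs] = nda::linalg::eigenelements(M1);
std::cout << eigvals << std::endl;
std::cout << eigvecs << std::endl;
```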
To check the correctness of our calculation, we use the fact that the matrix \( \mathbf{M}_1 \) can be factorized as
\[ \mathbf{M}_1 = \mathbf{Q} \mathbf{\Sigma} \mathbf{Q}^{-1} \; , \]
where \( \mathbf{Q} \) is the matrix with the eigenvectors in its columns and \( \mathbf{\Sigma} \) is the diagonal matrix of eigenvalues:
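A sketch of this check with the results from above (we assume nda::inverse for the matrix inverse):

```cpp
auto Q     = eigvecs;            // eigenvectors in the columns
auto Sigma = nda::diag(eigvals); // diagonal matrix of eigenvalues
std::cout << Q * Sigma * nda::inverse(Q) << std::endl; // should reproduce M1
```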
While the functions in Linear algebra tools offer a very user-friendly experience, the BLAS interface and the LAPACK interface are more low-level and usually require more input from the user.
Let us show how to use the LAPACK interface to solve a system of linear equations. The system we want to solve is \( \mathbf{A}_1 \mathbf{x}_1 = \mathbf{b}_1 \) with
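For instance (the matrix and vector entries are chosen for illustration; we use nda::F_layout, see the note at the end):

```cpp
auto A1 = nda::matrix<double, nda::F_layout>{{1.0, 2.0, 3.0},
                                             {4.0, 5.0, 6.0},
                                             {7.0, 8.0, 10.0}};
auto b1 = nda::vector<double>{6.0, 15.0, 25.0};
```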
First, we perform an LU factorization using nda::lapack::getrf:
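A sketch of the factorization step:

```cpp
auto LU = A1;             // getrf overwrites its input, so we work on a copy
nda::vector<int> ipiv(3); // pivot indices, filled by getrf
nda::lapack::getrf(LU, ipiv);
```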
Here, ipiv is a pivot vector that is used internally by the LAPACK routine. Because nda::lapack::getrf overwrites the input matrix with the result, we first copy the original matrix A1 into a new one, LU.
Then we solve the system by calling nda::lapack::getrs with the LU factorization, the right hand side vector and the pivot vector:
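A sketch of the solve step:

```cpp
nda::matrix<double, nda::F_layout> x1(3, 1); // getrs expects a matrix right hand side
x1(nda::range::all, 0) = b1;                 // copy b1 into the first column
nda::lapack::getrs(LU, x1, ipiv);            // overwrites x1 with the solution
std::cout << x1 << std::endl;
```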
Since b1 is a vector but nda::lapack::getrs expects a matrix, we first copy it into the matrix x1. After the LAPACK call returns, x1 contains the result in its first column.
Let's check that this is actually the solution to our original system of equations:
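A sketch of this check:

```cpp
nda::vector<double> sol = x1(nda::range::all, 0); // extract the solution vector
std::cout << A1 * sol << std::endl;               // should reproduce b1
```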
Considering the finite precision of our calculations, this is indeed equal to the right hand side vector b1.
Note: BLAS and LAPACK assume Fortran-ordered arrays. We therefore recommend working with matrices in nda::F_layout to avoid any confusion.