In this example, we show how to do linear algebra with nda arrays.
All the following code snippets are part of the same main function:
#include <nda/nda.hpp>

#include <complex>
#include <iostream>

int main(int argc, char *argv[]) {
  // all of the following code snippets belong here
}
Includes all relevant headers for the core nda library.
Before showing some linear algebra related operations, let us first introduce the nda::matrix and nda::vector types and highlight their similarities and differences with respect to nda::array.
Matrix and vector types
As already mentioned in the quick introduction Doing linear algebra with arrays, nda provides an nda::matrix and an nda::vector type. Both are specializations of the general nda::basic_array with the following features:
- nda::matrix is a 2-dimensional array that belongs to the 'M' algebra.
- nda::vector is a 1-dimensional array that belongs to the 'V' algebra.
Their algebras determine how matrices and vectors behave in certain operations and expressions.
Otherwise, everything that is true for nda::basic_array objects is also true for nda::matrix and nda::vector objects.
Constructing matrices and vectors
To construct a matrix or a vector, we can use the same methods as for nda::array objects. We refer the user to Example 2: Constructing arrays for more information.
For example, to construct a matrix and a vector of a certain shape, we can do
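The following is a minimal sketch (the names M and v are our own choice and are reused in later snippets):

nda::matrix<double> M(4, 3);
nda::vector<double> v(3);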
basic_array< ValueType, 1, C_layout, 'V', ContainerPolicy > vector
Alias template of an nda::basic_array with rank 1 and a 'V' algebra.
basic_array< ValueType, 2, Layout, 'M', ContainerPolicy > matrix
Alias template of an nda::basic_array with rank 2 and an 'M' algebra.
Here, a \( 4 \times 3 \) matrix and a vector of size \( 3 \) are created.
Let us note that there are some Factories and transformations that are specific to matrices:
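For example (a sketch; the names I and D and their values are inferred from the output below):

auto I = nda::eye<double>(3);
auto D = nda::diag(nda::array<double, 1>{1, 2});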
std::cout << "I = " << I << std::endl;
std::cout << "D = " << D << std::endl;
auto eye(Int dim)
Create an identity nda::matrix with ones on the diagonal.
ArrayOfRank< 2 > auto diag(V const &v)
Get a new nda::matrix with the given values on the diagonal.
basic_array< ValueType, Rank, Layout, 'A', ContainerPolicy > array
Alias template of an nda::basic_array with an 'A' algebra.
Output:
I =
[[1,0,0]
[0,1,0]
[0,0,1]]
D =
[[1,0]
[0,2]]
nda::eye constructs an identity matrix of a certain size and nda::diag takes a 1-dimensional array and constructs a square diagonal matrix containing the values of the given array.
Initializing and assigning to matrices and vectors
Again, initializing and assigning to matrices and vectors works (almost) exactly the same way as it does for arrays (see Example 3: Initializing arrays).
The main difference occurs when assigning a scalar. While for arrays and vectors the assignment is done element-wise, for matrices the scalar is assigned only to the elements on the shorter diagonal, while all other elements are zeroed out:
M = 3.1415;
v = 2.7182;
std::cout << "M = " << M << std::endl;
std::cout << "v = " << v << std::endl;
Output:
M =
[[3.1415,0,0]
[0,3.1415,0]
[0,0,3.1415]
[0,0,0]]
v = [2.7182,2.7182,2.7182]
Views on matrices and vectors
There are some Factories and transformations for views that are specific to matrices and vectors, otherwise everything mentioned in Example 4: Views and slices still applies:
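A sketch of how these might be created (reusing D from above; the names A, d, and A_mv match the output below):

auto d = nda::diagonal(D);
auto A = nda::array<int, 2>{{1, 2}, {3, 4}};
auto A_mv = nda::make_matrix_view(A);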
std::cout << "d = " << d << std::endl;
std::cout << "A_mv = " << A_mv << std::endl;
std::cout <<
"Algebra of A_mv: " <<
nda::get_algebra<
decltype(A_mv)> << std::endl;
auto make_matrix_view(basic_array< T, R, LP, A, CP > const &a)
Make an nda::matrix_view of a given nda::basic_array.
ArrayOfRank< 1 > auto diagonal(M &&m)
Get a view of the diagonal of a 2-dimensional array/view.
constexpr char get_algebra
Constexpr variable that specifies the algebra of a type.
Output:
d = [1,2]
Algebra of d: V
A_mv =
[[1,2]
[3,4]]
Algebra of A: A
Algebra of A_mv: M
HDF5, MPI and symmetry support for matrices and vectors
We refer the user to the corresponding examples on HDF5, MPI, and symmetry support.
There are no notable differences between arrays, matrices, and vectors regarding these features.
Arithmetic operations with matrices and vectors
Here, the algebra of the involved types becomes important. In section Performing arithmetic operations, we briefly introduced how arithmetic operations are implemented in nda in terms of lazy expressions and how the algebra of the operands influences the results.
Let us be more complete in this section. Suppose we have the following objects:
- nda::Array (any type that satisfies this concept): O1, O2, ...
- nda::array: A1, A2, ...
- nda::matrix: M1, M2, ...
- nda::vector: v1, v2, ...
- scalar: s1, s2, ...
Then the following operations are allowed (all operations are lazy unless mentioned otherwise; a small sketch after this list illustrates the difference between element-wise and matrix multiplication):
- Addition / Subtraction
- O1 +/- O2: element-wise addition/subtraction, shapes of O1 and O2 have to be the same, result has the same shape
- s1 +/- A1 and A1 +/- s1: element-wise addition/subtraction, result has the same shape as A1
- s1 +/- v1 and v1 +/- s1: element-wise addition/subtraction, result has the same shape as v1
- s1 +/- M1 and M1 +/- s1: the scalar is added to/subtracted from the elements on the shorter diagonal of M1, result has the same shape as M1
- Multiplication
- A1 * A2: element-wise multiplication, shapes of A1 and A2 have to be the same, result has the same shape
- M1 * M2: non-lazy matrix-matrix multiplication, shapes have the expected requirements for matrix multiplication, calls nda::matmul
- M1 * v1: non-lazy matrix-vector multiplication, shapes have the expected requirements for matrix multiplication, calls nda::matvecmul
- s1 * O1 and O1 * s1: element-wise multiplication, result has the same shape as O1
- Division
- A1 / A2: element-wise division, shapes of A1 and A2 have to be the same, result has the same shape
- M1 / M2: non-lazy matrix-matrix multiplication of M1 by the inverse of M2, shapes have the expected requirements for matrix multiplication with the additional requirement that M2 is square (since M2 is inverted), result is also square with the same size
- O1 / s1: element-wise division, result has the same shape as O1
- s1 / A1: element-wise division, result has the same shape as A1 (note this is != A1 / s1)
- s1 / v1: element-wise division, result has the same shape as v1 (note this is != v1 / s1)
- s1 / M1: multiplies (lazy) the scalar with the inverse (non-lazy) of M1, only square matrices are allowed (since M1 is inverted), result is also square with the same size
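To make the difference between the 'A' and 'M' algebras concrete, here is a small standalone sketch (the names B1, B2, N1, N2, etc. are ours and do not appear elsewhere in this example):

auto B1 = nda::array<double, 2>{{1, 2}, {3, 4}};
auto B2 = nda::array<double, 2>{{5, 6}, {7, 8}};
auto N1 = nda::matrix<double>{{1, 2}, {3, 4}};
auto N2 = nda::matrix<double>{{5, 6}, {7, 8}};
nda::array<double, 2> B3 = B1 * B2;  // element-wise: [[5,12],[21,32]]
nda::matrix<double> N3 = N1 * N2;    // matrix-matrix product: [[19,22],[43,50]]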
Using linear algebra tools
In addition to the basic matrix-matrix and matrix-vector multiplications described above, nda provides some useful Linear algebra tools. While some of them have a custom implementation, e.g. nda::linalg::cross_product, most make use of the BLAS interface and LAPACK interface to call more optimized routines.
Let us demonstrate some of these features:
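(A sketch of the setup; the names v1, v2, and v3 match the output below.)

auto v1 = nda::vector<double>{1, 2, 3};
auto v2 = nda::vector<double>{4, 5, 6};
auto v3 = nda::linalg::cross_product(v1, v2);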
std::cout << "v1 = " << v1 << std::endl;
std::cout << "v2 = " << v2 << std::endl;
std::cout << "v3 = v1 x v2 = " << v3 << std::endl;
Output:
v1 = [1,2,3]
v2 = [4,5,6]
v3 = v1 x v2 = [-3,6,-3]
Here, we first create two 3-dimensional vectors and then take their cross product.
To check that the cross product is perpendicular to v1 and v2, we can use the dot product:
std::cout << "v1 . v3 = " << nda::dot(v1, v3) << std::endl;
std::cout << "v2 . v3 = " << nda::dot(v2, v3) << std::endl;
Output:
v1 . v3 = 0
v2 . v3 = 0
The cross product can also be expressed as the matrix-vector product of a skew-symmetric matrix and one of the vectors:
\[ \mathbf{a} \times \mathbf{b} =
\begin{bmatrix}
0 & -a_3 & a_2 \\
a_3 & 0 & -a_1 \\
-a_2 & a_1 & 0
\end{bmatrix}
\begin{bmatrix}
b_1 \\
b_2 \\
b_3
\end{bmatrix}
\]
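One way to build this skew-symmetric matrix from v1 (a sketch; the name M_v1 matches the output below):

auto M_v1 = nda::matrix<double>{{0.0, -v1(2), v1(1)}, {v1(2), 0.0, -v1(0)}, {-v1(1), v1(0), 0.0}};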
std::cout << "M_v1 = " << M_v1 << std::endl;
auto v3_mv = M_v1 * v2;
std::cout << "v3_mv = " << v3_mv << std::endl;
Output:
M_v1 =
[[0,-3,2]
[3,0,-1]
[-2,1,0]]
v3_mv = [-3,6,-3]
Comparing this result to v3 above, we see that this is indeed correct.
Let us now turn to an eigenvalue problem. nda offers the convenience functions nda::linalg::eigenelements and nda::linalg::eigenvalues to obtain the eigenvalues and eigenvectors of a symmetric or hermitian matrix.
We start from the following symmetric matrix:
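(A sketch of its construction, with the values taken from the output below.)

auto M1 = nda::matrix<double>{{4, -14, -12}, {-14, 10, 13}, {-12, 13, 1}};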
std::cout << "M1 = " << M1 << std::endl;
Output:
M1 =
[[4,-14,-12]
[-14,10,13]
[-12,13,1]]
Getting the eigenvalues and eigenvectors is quite easy:
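(A sketch of the call; we assume that nda::linalg::eigenelements returns the eigenvalues s and the eigenvector matrix Q as a pair.)

auto [s, Q] = nda::linalg::eigenelements(M1);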
std::cout << "Eigenvalues of M1: s = " << s << std::endl;
std::cout << "Eigenvectors of M1: Q = " << Q << std::endl;
Output:
Eigenvalues of M1: s = [-9.64366,-6.89203,31.5357]
Eigenvectors of M1: Q =
[[0.594746,-0.580953,0.555671]
[-0.103704,-0.740875,-0.663588]
[0.797197,0.337041,-0.500879]]
To check the correctness of our calculation, we use the fact that the matrix \( \mathbf{M}_1 \) can be factorized as
\[ \mathbf{M}_1 = \mathbf{Q} \mathbf{\Sigma} \mathbf{Q}^{-1} \; ,
\]
where \( \mathbf{Q} \) is the matrix with the eigenvectors in its columns and \( \mathbf{\Sigma} \) is the diagonal matrix of eigenvalues:
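Since \( \mathbf{M}_1 \) is real and symmetric, \( \mathbf{Q} \) is orthogonal and \( \mathbf{Q}^{-1} = \mathbf{Q}^T \). A sketch of the reconstruction (the name M1_reconstructed matches the output below):

auto M1_reconstructed = Q * nda::diag(s) * nda::transpose(Q);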
std::cout << "M1_reconstructed = " << M1_reconstructed << std::endl;
auto transpose(A &&a)
Transpose the memory layout of an nda::MemoryArray or an nda::expr_call.
Output:
M1_reconstructed =
[[4,-14,-12]
[-14,10,13]
[-12,13,1]]
Using the BLAS/LAPACK interface
While the functions in Linear algebra tools offer a very user-friendly experience, the BLAS interface and LAPACK interface are more low-level and usually require more input from the user.
Let us show how to use the LAPACK interface to solve a system of linear equations. The system we want to solve is \( \mathbf{A}_1 \mathbf{x}_1 = \mathbf{b}_1 \) with
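(The following construction is a sketch; the values match the output below, and we use nda::F_layout as recommended in the note at the end of this example.)

auto A1 = nda::matrix<double, nda::F_layout>{{3, 2, -1}, {2, -2, 4}, {-1, 0.5, -1}};
auto b1 = nda::vector<double>{1, -2, 0};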
std::cout << "A1 = " << A1 << std::endl;
std::cout << "b1 = " << b1 << std::endl;
Output:
A1 =
[[3,2,-1]
[2,-2,4]
[-1,0.5,-1]]
b1 = [1,-2,0]
First, we perform an LU factorization using nda::lapack::getrf:
auto LU = A1;
nda::vector<int> ipiv(3); // pivot vector filled by getrf (its declaration here is an assumption)
int info = nda::lapack::getrf(LU, ipiv);
if (info != 0) {
std::cerr << "Error: nda::lapack::getrf failed with error code " << info << std::endl;
return 1;
}
int getrf(A &&a, IPIV &&ipiv)
Interface to the LAPACK getrf routine.
Here, ipiv is a pivot vector that is used internally by the LAPACK routine. Because nda::lapack::getrf overwrites the input matrix with the result, we first copy the original matrix A1 into a new matrix LU.
Then we solve the system by calling nda::lapack::getrs with the LU factorization, the right hand side vector and the pivot vector:
nda::matrix<double, nda::F_layout> x1(3, 1); // one-column matrix to hold the solution (assumed declaration)
x1(nda::range::all, 0) = b1;
info = nda::lapack::getrs(LU, x1, ipiv);
if (info != 0) {
std::cerr << "Error: nda::lapack::getrs failed with error code " << info << std::endl;
return 1;
}
std::cout << "x1 = " << x1(nda::range::all, 0) << std::endl;
int getrs(A const &a, B &&b, IPIV const &ipiv)
Interface to the LAPACK getrs routine.
Output:
x1 = [1,-2,-2]
Since b1 is a vector but nda::lapack::getrs expects a matrix, we first copy it into the matrix x1. After the LAPACK call returns, x1 contains the result in its first column.
Let's check that this is actually the solution to our original system of equations:
std::cout << "A1 * x1 = " << A1 * x1(nda::range::all, 0) << std::endl;
Output:
A1 * x1 = [1,-2,2.22045e-16]
Considering the finite precision of our calculations, this is indeed equal to the right hand side vector b1.
Note: BLAS and LAPACK assume Fortran-ordered arrays. We therefore recommend working with matrices in the nda::F_layout to avoid any confusion.