DMD¶
For people already familiar with the method and interface, see the API docs.
Here we present the basic ideas behind a family of dynamic mode decomposition (DMD) methods, in particular the original "standard DMD" [1] and its variant "exact DMD" [2]. The principal idea is that DMD should be able to characterize / decompose potentially highly nonlinear dynamics by solving linear systems of equations and studying their spectral properties.
Considering a sequential set of data vectors $\{x_1, \ldots, x_T\}$ with $x_t \in \mathbb{R}^n$ and assuming that

$$x_{t+1} = A x_t$$

for some usually unknown matrix $A$, one can define standard DMD as follows (following the presentation of [2], originally introduced as DMD in [1]).
1. The data is arranged into matrices $X = [x_1, \ldots, x_{T-1}]$ and $Y = [x_2, \ldots, x_T]$ of matching pairs of time-shifted datapoints, i.e., $y_t = x_{t+1}$ for $t = 1, \ldots, T-1$.
2. Compute the (optionally truncated) SVD of $X$ to obtain $X = U \Sigma V^*$.
3. Define $\tilde A := U^* Y V \Sigma^{-1}$.
4. Compute eigenvalues $\lambda$ and eigenvectors $w$ of $\tilde A$, i.e., $\tilde A w = \lambda w$.
5. The DMD mode corresponding to the DMD eigenvalue $\lambda$ is given by $\varphi = U w$.
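As an illustrative sketch (not Deeptime's implementation), the five steps above can be written in plain NumPy; the function name `standard_dmd` and the `rank` parameter are our own choices here:

```python
import numpy as np

def standard_dmd(X, Y, rank=None):
    """Sketch of standard DMD: X, Y are (n, m) snapshot matrices with y_t = x_{t+1}."""
    # step 2: (optionally truncated) SVD of X
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # step 3: reduced operator A_tilde = U* Y V Sigma^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    # step 4: eigendecomposition of the reduced operator
    eigenvalues, W = np.linalg.eig(A_tilde)
    # step 5: standard (projected) DMD modes phi = U w
    modes = U @ W
    return eigenvalues, modes
```

For exactly linear data the eigenvalues of $\tilde A$ coincide with those of the propagator $A$, since $\tilde A = U^* A U$ in that case.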
It should be remarked that even if the data-generating dynamics are nonlinear, it is assumed that there exists some linear operator $A$ which approximates them.
The notion of "exact" DMD [2] weakens the assumptions and only requires that the data vectors come in corresponding pairs $(x_t, y_t)$. The operator $A$ is defined as the solution to the best-approximation problem $\min_A \| Y - AX \|_F$, i.e., $A = Y X^+$, where $X^+$ denotes the Moore–Penrose pseudoinverse. This leads to exact DMD modes corresponding to eigenvalue $\lambda$ of the form

$$\varphi = \frac{1}{\lambda} Y V \Sigma^{-1} w.$$
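Exact DMD reuses the same reduced operator but reconstructs the modes from $Y$ instead of $U$. A minimal sketch (again our own illustration, with a hypothetical function name, not Deeptime's code):

```python
import numpy as np

def exact_dmd(X, Y, rank=None):
    """Sketch of exact DMD: X, Y are (n, m) matrices of snapshot pairs."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    B = Y @ Vh.conj().T @ np.diag(1.0 / s)  # B = Y V Sigma^{-1}
    A_tilde = U.conj().T @ B                # same reduced operator as standard DMD
    eigenvalues, W = np.linalg.eig(A_tilde)
    # exact modes: phi = (1/lambda) Y V Sigma^{-1} w (requires lambda != 0)
    modes = (B @ W) / eigenvalues
    return eigenvalues, modes
```

For nonzero eigenvalues, these modes are genuine eigenvectors of $A = Y X^+$, which is the sense in which this variant is "exact" [2].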
Both estimation modes are united within the same estimator in Deeptime:
[1]:
from deeptime.decomposition import DMD
standard_dmd = DMD(mode='standard')
exact_dmd = DMD(mode='exact')
The standard DMD is sometimes also referred to as projected DMD [2], since the modes are eigenvectors of $\mathbb{P}_X A$, where $\mathbb{P}_X$ is the orthogonal projection onto the image of $X$. If the data vectors $y_t$ lie in the image of $X$, then $\mathbb{P}_X Y = Y$ and the projected/standard and exact DMD modes coincide.
This distinction can be especially relevant if there are more dimensions than datapoints.
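To make the projection concrete: with $X = U \Sigma V^*$, the projector onto the image of $X$ is $\mathbb{P}_X = U U^*$. A small sketch with hypothetical random data, chosen with more dimensions than snapshots so that the projector is not the identity:

```python
import numpy as np

# hypothetical data: 5 dimensions but only 3 snapshots,
# so the image of X is a proper subspace of R^5
X = np.random.default_rng(0).normal(size=(5, 3))
U = np.linalg.svd(X, full_matrices=False)[0]
P_X = U @ U.conj().T  # orthogonal projector onto the image of X

assert np.allclose(P_X @ P_X, P_X)       # idempotent, as a projector should be
assert np.allclose(P_X @ X, X)           # leaves the columns of X unchanged
assert not np.allclose(P_X, np.eye(5))   # but is not the identity here
```

With at least as many linearly independent snapshots as dimensions, $\mathbb{P}_X$ becomes the identity and the distinction between the two mode definitions disappears.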
We demonstrate the methods using a random orthogonal matrix $A$ which serves as our propagator, plus some normally distributed noise, i.e., $x_{t+1} = A x_t + \varepsilon_t$, where $\varepsilon_t$ is Gaussian noise with standard deviation $10^{-4}$:
[2]:
import numpy as np
state = np.random.RandomState(45)
A = np.linalg.qr(state.normal(size=(2, 2)))[0]
x0 = state.uniform(-1, 1, size=(A.shape[0],))
data = np.empty((500, A.shape[0]))
data[0] = x0
for t in range(1, len(data)):
data[t] = A @ data[t - 1] + state.normal(scale=1e-4, size=(2,))
Visualizing the generated data we can see that it is a jumping process between two wells.
[3]:
import matplotlib.pyplot as plt
plt.scatter(*data.T);

Fitting the models to this dataset reveals that the eigenvalues are identical:
[4]:
standard_model = standard_dmd.fit((data[:-1], data[1:])).fetch_model()
exact_model = exact_dmd.fit((data[:-1], data[1:])).fetch_model()
[5]:
print(f"Eigenvalues standard {standard_model.eigenvalues}, exact {exact_model.eigenvalues}")
Eigenvalues standard [ 1.00003048+0.j -1.00015445+0.j], exact [ 1.00003048+0.j -1.00015445+0.j]
We propagate the datapoints using our estimated models:
[6]:
traj_standard = standard_model.transform(data)
traj_exact = exact_model.transform(data)
A scatter plot reveals that all data points get mapped to the same two wells and the propagator was approximated almost perfectly.
[7]:
plt.scatter(*data.T, alpha=.5, marker='x', label='ground truth')
plt.scatter(*np.real(traj_standard)[::5].T, marker='*',alpha=.5, label='standard DMD')
plt.scatter(*np.real(traj_exact)[::5].T, alpha=.1, label='exact DMD')
plt.legend();

References

[1] Schmid, P. J. (2010). Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656, 5–28.

[2] Tu, J. H., Rowley, C. W., Luchtenburg, D. M., Brunton, S. L., Kutz, J. N. (2014). On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2), 391–421.