Here I rewrite the recipe for the KLIP PSF-subtraction algorithm of Soummer, Pueyo & Larkin (2012) in vector form, with fewer of the confusing indices, so it's easier to code:
- Assemble the reference images into the matrix R, containing K rows, each one a reference image flattened into a vector of length N pixels. Subtract the mean of each image vector (row) from itself.
- Get the eigendecomposition of the K x K covariance matrix R R^T. This yields the ranked eigenvalue vector λ, and the corresponding eigenvectors as rows of the K x K matrix V^T.
- Divide each (kth) row of V^T by its singular value √λ_k; call this new matrix Ṽ.
- The principal component basis Z (dim. K x N) is formed by a transformation of R (S&P&L eq. 5): Z = Ṽ R. The kth row of Z is the kth principal component of the reference data matrix R.
- Choose the number of principal components to retain, K_klip < K. The truncated basis Z_K (the first K_klip rows of Z) is the new K-L basis for describing the PSF.
- To form the PSF estimate Î, project the target row vector T (dim. 1 x N) onto the K-L basis (S&P&L eq. 8): Î = T Z_K^T Z_K. Grouped as T (Z_K^T Z_K), this last step is by far the most time- and memory-consuming part, because of the large N x N operand; grouping it as (T Z_K^T) Z_K avoids that intermediate entirely.
And here's a Python implementation:

```python
import numpy as np

def get_klip_basis(R, cutoff):
    # R: K x N matrix of mean-subtracted reference image vectors;
    # cutoff: number of principal components to retain.
    # eigh is the right call here: R R^T is symmetric, so it returns real eigenvalues.
    w, V = np.linalg.eigh(np.dot(R, np.transpose(R)))
    sort_ind = np.argsort(w)[::-1]  # indices of eigenvals sorted in descending order
    sv = np.sqrt(w[sort_ind]).reshape(-1, 1)  # column of ranked singular values
    # Each row of V^T divided by its singular value, then applied to R (S&P&L eq. 5)
    Z = np.dot(1. / sv * np.transpose(V[:, sort_ind]), R)
    return Z[0:cutoff, :], sv
```
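The function above only builds the basis; the projection step from the last bullet is separate. Here's a minimal end-to-end sketch on synthetic data — the image count, pixel count, and random "images" are made up for illustration, and the function is repeated so the snippet runs on its own:

```python
import numpy as np

def get_klip_basis(R, cutoff):
    # As above: truncated K-L basis from the mean-subtracted reference matrix R.
    w, V = np.linalg.eigh(np.dot(R, np.transpose(R)))
    sort_ind = np.argsort(w)[::-1]
    sv = np.sqrt(w[sort_ind]).reshape(-1, 1)
    Z = np.dot(1. / sv * np.transpose(V[:, sort_ind]), R)
    return Z[0:cutoff, :], sv

rng = np.random.default_rng(0)
K, N = 10, 400                          # 10 fake reference "images" of 400 pixels each
R = rng.normal(size=(K, N))
R -= R.mean(axis=1, keepdims=True)      # mean-subtract each reference row

Z_K, sv = get_klip_basis(R, cutoff=5)   # truncated K-L basis, 5 x N

T = rng.normal(size=(1, N))
T -= T.mean()                           # mean-subtract the target as well
psf = np.dot(np.dot(T, Z_K.T), Z_K)     # (T Z_K^T) Z_K: 1 x K_klip then K_klip x N
residual = T - psf                      # what's left after PSF subtraction
```

Because the rows of Z_K come out orthonormal, projecting the PSF estimate a second time just returns it unchanged, which is a quick sanity check on the basis.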