coding KLIP

Here I rewrite the recipe of the KLIP PSF-subtraction algorithm of Soummer, Pueyo & Larkin (2012) in vector form, with fewer of the confusing indices, so it's easier to code:

  • Assemble the reference images into matrix \mathbf{R}, containing K rows, each one a reference image vector of length N pixels. Subtract the mean of each image vector (row) from itself.
  • Get the eigendecomposition of \mathbf{R} \mathbf{R}^T and sort it so the eigenvalues are in descending order. This yields the ranked eigenvalue vector \mathbf{\Lambda} and the corresponding eigenvectors as rows of the KxK matrix \mathbf{V}.
  • Divide each (kth) row of \mathbf{V} by its singular value, \sqrt{\lambda_k}, call this new matrix \mathbf{V'}.
  • The principal component basis, \mathbf{Z} (dim. KxN), is formed by a transformation of \mathbf{R} (S&P&L eq 5): \mathbf{Z} = \mathbf{V'R}. The kth row of \mathbf{Z} is the kth principal component of the reference data matrix \mathbf{R}.
  • Choose K_{\rm KLIP} principal components to retain. The truncated basis, \mathbf{Z}_{\rm KL} (with K_{\rm KLIP} \le K rows) is the new K-L basis for describing the PSF.
  • To form the PSF estimate, \mathbf{\hat{I}}, project the target row vector \mathbf{T} (dim. 1xN) onto the K-L basis (S&P&L eq 8): \mathbf{\hat{I}} = \mathbf{T} \mathbf{Z}_{\rm KL}^T \mathbf{Z}_{\rm KL}. Computed naively, by forming the NxN matrix \mathbf{Z}_{\rm KL}^T \mathbf{Z}_{\rm KL} first, this last step is by far the most time- and memory-consuming part; grouping the product as (\mathbf{T} \mathbf{Z}_{\rm KL}^T) \mathbf{Z}_{\rm KL} instead produces only a 1xK_{\rm KLIP} intermediate and avoids the large NxN operand entirely.

and here’s a Python implementation:

import numpy as np

def get_klip_basis(R, cutoff):
    """Return the first `cutoff` K-L basis vectors (as rows) for the
    mean-subtracted KxN reference matrix R, plus the ranked singular values."""
    w, V = np.linalg.eigh(np.dot(R, np.transpose(R))) #eigh: R R^T is symmetric, so eigenvalues are real
    sort_ind = np.argsort(w)[::-1] #indices of eigenvals sorted in descending order
    sv = np.sqrt(w[sort_ind]).reshape(-1,1) #column of ranked singular values
    Z = np.dot(1./sv*np.transpose(V[:, sort_ind]), R) #scale each row to unit norm
    return Z[0:cutoff, :], sv
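To close the loop, here is a sketch of the remaining steps — mean subtraction, basis construction, and the eq 8 projection with the efficient (T Z^T) Z grouping. The data are synthetic stand-ins (random "images"), and `get_klip_basis` is repeated from above so the snippet runs on its own:

```python
import numpy as np

def get_klip_basis(R, cutoff):
    # definition from above, repeated so this snippet is self-contained
    w, V = np.linalg.eigh(np.dot(R, np.transpose(R)))
    sort_ind = np.argsort(w)[::-1]
    sv = np.sqrt(w[sort_ind]).reshape(-1, 1)
    Z = np.dot(1. / sv * np.transpose(V[:, sort_ind]), R)
    return Z[0:cutoff, :], sv

# Synthetic stand-ins: 5 reference images and one target, 50 pixels each
rng = np.random.default_rng(0)
refs = rng.normal(size=(5, 50))
R = refs - refs.mean(axis=1, keepdims=True)  # subtract each row's mean
T = rng.normal(size=(1, 50))
T = T - T.mean()

Z_KL, sv = get_klip_basis(R, cutoff=3)  # keep K_KLIP = 3 components

# Project T onto the K-L basis, grouped as (T Z^T) Z so no NxN matrix is formed
coeffs = np.dot(T, Z_KL.T)      # 1 x K_KLIP coefficient vector
I_hat = np.dot(coeffs, Z_KL)    # PSF estimate, 1 x N
residual = T - I_hat            # PSF-subtracted target
```

Because the rows of Z_KL are orthonormal, the residual has no remaining component along the retained basis, which is a quick sanity check on any implementation.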