Integral and Discrete Transforms in Image Processing
Singular Value Decomposition and Independent Component Analysis

David Svoboda (email: svoboda@fi.muni.cz)
Centre for Biomedical Image Analysis
Faculty of Informatics, Masaryk University, Brno, CZ
October 10, 2022

Outline
1 Singular Value Decomposition
2 Independent Component Analysis (ICA)

Singular Value Decomposition (SVD)

Motivation
Can we decompose an image into more and less important parts?
Each image is understood as a matrix. Can we factorize such a matrix into two transform matrices and one matrix with (singular) values on its diagonal?

Definition
Any matrix A ∈ R^(m×n) can be rewritten in the form

  A = U Σ V^T

where
  U ∈ R^(m×m) ... orthogonal matrix (its columns are orthonormal)
  Σ ∈ R^(m×n) ... rectangular diagonal matrix with nonzero values (the singular values) on its diagonal only
  V ∈ R^(n×n) ... orthogonal matrix (its columns are orthonormal)

Background
Provided A = U Σ V^T, with U, V orthogonal and Σ diagonal, we can derive (using U^T U = I):

  A^T A = (U Σ V^T)^T (U Σ V^T) = V Σ^T U^T U Σ V^T = V Σ² V^T

Eigendecomposition of A^T A therefore gives:
  V  ... matrix of eigenvectors of A^T A
  Σ² ... diagonal matrix of eigenvalues (ordered by importance, i.e. by magnitude)

Background (cont'd)
V^T ... the rows of V^T are the normalized eigenvectors of A^T A.
Σ   ... the diagonal of Σ is formed by the square roots of the eigenvalues of A^T A.
U   ... matrix U can then be derived directly (since V^(-1) = V^T):

  A = U Σ V^T  ⇒  U = A (Σ V^T)^(-1) = A V Σ^(-1)

(the inverse of Σ is taken over its nonzero singular values)

An example
Let A ∈ R^(2×3) be the matrix

  A = [ 3  2  1 ]
      [ 2  1  4 ]

First we form A^T A, which is positive semi-definite, i.e. all its eigenvalues are real and non-negative:

  A^T A = [ 3  2 ] [ 3  2  1 ]   [ 13   8  11 ]
          [ 2  1 ] [ 2  1  4 ] = [  8   5   6 ]
          [ 1  4 ]               [ 11   6  17 ]

An example (cont'd)
The eigenvalues and the corresponding normalized eigenvectors of A^T A are:

  λ1 = 30,  e1 = (17, 10, 19) / (5√30)
  λ2 = 5,   e2 = (6, 5, −8) / (5√5)
  λ3 = 0,   e3 = (7, −10, −1) / (5√6)

Hence

  V = [ e1  e2  e3 ]   (the e_i are the columns of V, i.e. the rows of V^T)

  Σ = [ √30   0   0 ]
      [   0  √5   0 ]

Notice: the zero eigenvalue contributes no singular value; Σ has the same shape as A and carries only min(m, n) = 2 diagonal entries.

An example (cont'd)
The last step is to obtain the matrix U:

  U = A V Σ^(-1)

Due to the diagonal nature of Σ, this can be done column by column:

  u_i = (1/√λ_i) A e_i

Therefore

  U = [ u1  u2 ] = [ 3/5   4/5 ]
                   [ 4/5  −3/5 ]

Finally

  A = [ 3/5   4/5 ] [ √30   0   0 ] [ 17/(5√30)   10/(5√30)   19/(5√30) ]
      [ 4/5  −3/5 ] [   0  √5   0 ] [  6/(5√5)     5/(5√5)    −8/(5√5)  ]
                                    [  7/(5√6)   −10/(5√6)    −1/(5√6)  ]
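The factorization above can be checked numerically. The following is a minimal sketch in Python/NumPy (it is not part of the original slides); note that numpy.linalg.svd may return singular vectors with flipped signs, which is still a valid SVD:

import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [2.0, 1.0, 4.0]])

# Full SVD: A = U @ Sigma @ Vt, with singular values (sqrt(30), sqrt(5))
U, S, Vt = np.linalg.svd(A, full_matrices=True)
print(S**2)                                    # eigenvalues of A^T A: [30. 5.]

# Rebuild the rectangular Sigma and verify the reconstruction
Sigma = np.zeros(A.shape)
Sigma[:len(S), :len(S)] = np.diag(S)
print(np.allclose(A, U @ Sigma @ Vt))          # True

# Keeping only the largest singular value gives the best rank-1
# approximation of A; the same truncation applied to an image matrix
# is the compression idea shown on the later slides.
A1 = S[0] * np.outer(U[:, 0], Vt[0, :])
print(np.round(A1, 2))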
Interpretation (geometrical meaning in 2D)
Matrix A can be understood as a composition of several basic linear transforms: applying A = U Σ V^T means first rotating by V^T, then scaling along the axes by Σ, and finally rotating by U.
[Figure: the unit circle mapped step by step by V^T, Σ and U; source: wikipedia.org]

Interpretation (for higher dimensions)
Let A be a flattened database of faces:

  A = [ face1  face2  face3  ...  facen ]

The SVD of A into U Σ V^T gives us the eigenfaces as the columns of U, ordered by importance.
[Figure: the most important (0th) eigenvector visualized as an eigenface]

Application in Image Processing – Compression
An arbitrary image (stored in a matrix A) can be
 1 decomposed: A → U Σ V^T
 2 reduced: some of the singular values in Σ are eliminated
 3 composed again: A ← U Σ V^T
[Figure: reconstructions using n = 1, 10, 30, 60 and 100 singular values compared with the original image (n = 512); n ... number of retained singular values]

Independent Component Analysis (ICA)

Motivation
The cocktail party problem: several people speak at once and each microphone records a different mixture of their voices.
[Figure: two speakers recorded by two microphones; source: https://vocal.com/blind-signal-separation/independent-component-analysis/]
Latent components:
  Person 1 ... signal s1(t)
  Person 2 ... signal s2(t)
Measured components:
  Mixture 1 ... signal x1(t)
  Mixture 2 ... signal x2(t)

Problem formulation
Definition
Let

  X = [ x1(t) ]      S = [ s1(t) ]
      [ x2(t) ]          [ s2(t) ]

Then X = A S, where A is an unknown mixing matrix:

  A = [ a11  a12 ]   ⇒   x1(t) = a11 s1(t) + a12 s2(t)
      [ a21  a22 ]        x2(t) = a21 s1(t) + a22 s2(t)

We search for an unmixing matrix W ≈ A^(-1):  S = W X.

ICA can be used only if
  s1, s2 are independent
  s1, s2 do not follow a Gaussian distribution (at most one source may be Gaussian)

Step by step
 1 Preprocess the data matrix X: whiten the input (measured) data.
 2 Run the FastICA algorithm: rotate the data to find the projection that maximizes the non-Gaussianity of the source data ⇒ obtain the projection (unmixing) matrix W.
 3 Estimate the original data: S = W X.
Notice: FastICA is only one of many methods (InfoMax, projection pursuit, kurtosis-based methods, maximum likelihood) that solve the ICA problem.

Whitening
The objective is to normalize the data:
 1 Center the data (shift by the mean).
 2 Decorrelate the data (apply PCA).
 3 Equalize the variances (divide the i-th component by the square root of its eigenvalue, √σi).
[Figure: original, centered, decorrelated and whitened data; source: https://pantelis.github.io/cs677]

FastICA algorithm
Inputs:
  c ... number of latent components
  X ∈ R^(n×m) ... matrix representing the whitened measured signals
  g(u) = tanh(u)
Outputs:
  W ∈ R^(n×c) ... unmixing matrix, W ≈ A^(-1)
  S ∈ R^(c×m) ... matrix of estimated original components

 1: for p = 1 ... c do
 2:   wp ← random vector of length n
 3:   while wp has not converged do
 4:     wp ← E{ x g(wp^T x) } − E{ g'(wp^T x) } wp          ▷ Newton iteration
 5:     wp ← wp − Σ_{j=1}^{p−1} (wp^T wj) wj                ▷ orthogonalization
 6:     wp ← wp / ||wp||                                     ▷ normalization
 7:   end while
 8: end for
 9: W ← [w1, ..., wc]                                        ▷ build the unmixing matrix
10: S ← W^T X                                                ▷ estimate the original components
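To make the steps above concrete, here is a minimal self-contained sketch of whitening followed by deflation-based FastICA in Python/NumPy. It is not taken from the slides: the function names (whiten, fastica), the toy source signals, the tolerance and the iteration limit are illustrative choices, and practical FastICA implementations differ in such details.

import numpy as np

def whiten(X):
    """Center and whiten the measured signals X (rows = signals, columns = samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)        # 1) center the data
    eigval, E = np.linalg.eigh(np.cov(Xc))        # 2) decorrelate (PCA)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(eigval))   # 3) equalize the variances
    return D_inv_sqrt @ E.T @ Xc                  # whitened data (identity covariance)

def fastica(X, c, max_iter=200, tol=1e-6):
    """Deflation FastICA with g(u) = tanh(u); X must already be whitened."""
    n, m = X.shape
    rng = np.random.default_rng(0)
    W = np.zeros((n, c))
    for p in range(c):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            wx = w @ X                                          # projections w^T x
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w_new = (X * g).mean(axis=1) - g_prime.mean() * w   # Newton iteration
            w_new -= W[:, :p] @ (W[:, :p].T @ w_new)            # orthogonalization
            w_new /= np.linalg.norm(w_new)                      # normalization
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[:, p] = w
    return W

# Toy cocktail-party demo: two non-Gaussian sources mixed by a fixed matrix
t = np.linspace(0, 8, 2000)
S_true = np.vstack([np.sign(np.sin(3 * t)),      # square wave
                    np.sin(5 * t) ** 3])         # non-Gaussian smooth signal
A_mix = np.array([[1.0, 0.6],
                  [0.4, 1.0]])
X = A_mix @ S_true                               # measured mixtures
Xw = whiten(X)
W = fastica(Xw, c=2)
S_est = W.T @ Xw                                 # estimated original components

Comparing S_est with S_true shows that the rows are recovered only up to permutation, sign and scale, which is the inherent ambiguity of ICA.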
Conclusion
Let's inspect the mutual relationships of:
  Principal Component Analysis (PCA) ... decorrelation
  Singular Value Decomposition (SVD) ... generalization of PCA
  Independent Component Analysis (ICA) ... independence
[Figure: directions found by PCA versus ICA on the same data; source: www.tu-chemnitz.de]

Bibliography
Compton E. A., Ernstberger S. L.: Singular Value Decomposition: Application to Image Processing. Journal of Undergraduate Research, Volume 17, 2020.
Hyvärinen A., Oja E.: Independent component analysis: algorithms and applications. Neural Networks 13(4-5):411-430, 2000.
YouTube lectures by Steven L. Brunton (https://youtu.be/gXbThCXjZFM).

You should know the answers ...
  Explain the purpose of using SVD.
  What are the main properties of the matrices U and V?
  What does a singular value mean?
  Can PCA fail to find proper components? Support your claim.
  Give an example of how ICA can be applied to image/signal processing.
  What does whitening mean?