
Signal Processing Course: Sparse Regularization of Inverse Problems

Slides for a course on signal and image processing.


  1. Sparse Regularization of Inverse Problems. Gabriel Peyré, www.numerical-tours.com
  2. Overview: • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  3-6. Inverse Problems. Forward model: y = K f0 + w, with observations y ∈ R^P, operator K : R^Q → R^P, unknown input f0 ∈ R^Q, and noise w. Denoising: K = Id_Q, P = Q. Inpainting: Ω is the set of missing pixels, P = Q − |Ω|, and (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω. Super-resolution: Kf = (f ⋆ k) ↓_ℓ (blur then subsample by ℓ), P = Q/ℓ.
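As a concrete illustration of these forward operators, here is a minimal NumPy sketch of the inpainting and super-resolution maps K on 1D signals (all names, sizes, and the kernel are illustrative, not taken from the course code):

```python
import numpy as np

def inpainting_K(f, mask):
    """(Kf)(x) = 0 on the missing set Omega, f(x) elsewhere."""
    return f * mask  # mask[x] = 0 if x is in Omega, 1 otherwise

def superres_K(f, k, ell):
    """Kf = (f * k) downsampled by a factor ell, so P = Q/ell."""
    blurred = np.convolve(f, k, mode="same")
    return blurred[::ell]

rng = np.random.default_rng(0)
Q = 16
f0 = rng.standard_normal(Q)
mask = (rng.random(Q) > 0.3).astype(float)
y_inpaint = inpainting_K(f0, mask)        # observations with zeros on Omega
y_sr = superres_K(f0, np.ones(4) / 4, 2)  # P = Q/2 observations
```

Both maps are linear in f, which is all the regularization framework of the next slides requires.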
  7-9. Inverse Problems in Medical Imaging. Kf = (p_{θ_k})_{1≤k≤K}, the projections of f along a set of K angles θ_k. Magnetic resonance imaging (MRI): Kf = (f̂(ω))_{ω∈Ω}, samples of the Fourier transform f̂ on a set Ω of frequencies. Other examples: MEG, EEG, ...
  10-13. Inverse Problem Regularization. Noisy measurements: y = K f0 + w. Prior model: J : R^Q → R assigns a score to images. f⋆ ∈ argmin_{f ∈ R^Q} (1/2)||y − K f||² + λ J(f), where the first term is the data fidelity and the second the regularity. Choice of λ: tradeoff between the noise level ||w|| and the regularity J(f0). No noise: λ → 0⁺, and one minimizes f⋆ ∈ argmin_{f ∈ R^Q, Kf = y} J(f).
  14-15. Smooth and Cartoon Priors. Sobolev prior: J(f) = ∫ ||∇f(x)||² dx, small when |∇f|² is small everywhere (smooth images). Total variation prior: J(f) = ∫ ||∇f(x)|| dx = ∫_R length(C_t) dt, the total length of the level sets C_t (co-area formula), which tolerates sharp edges (cartoon images).
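A small sketch of the two discrete priors, assuming forward finite differences with periodic boundary conditions (the course's own discretization may differ); on a step edge both energies are finite, but Sobolev penalizes the jump quadratically, which is why it blurs edges:

```python
import numpy as np

def grad(f):
    """Forward-difference gradient of a 2D image, shape (2, n, n), periodic."""
    gx = np.roll(f, -1, axis=0) - f
    gy = np.roll(f, -1, axis=1) - f
    return np.stack([gx, gy])

def sobolev(f):
    """J(f) = sum_x ||grad f(x)||^2  (smooth prior)."""
    return np.sum(grad(f) ** 2)

def total_variation(f):
    """J(f) = sum_x ||grad f(x)||  (cartoon prior, allows sharp edges)."""
    g = grad(f)
    return np.sum(np.sqrt(np.sum(g ** 2, axis=0)))

f = np.zeros((8, 8))
f[:, 4:] = 1.0   # a vertical step edge (plus its periodic wrap-around edge)
```

For this unit step the TV equals the total edge length, while refining the grid would make the Sobolev energy of a sharp edge blow up.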
  16. Inpainting Example. Input y = K f0 + w; Sobolev reconstruction; total variation reconstruction.
  17. Overview: • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  18-22. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q. Fourier: φ_m = e^{iω_m ·}, m indexes the frequency. Wavelets: φ_m = ψ(2^{−j} R_θ x − n), m = (j, θ, n) indexing scale, orientation, and position. Other examples: DCT, curvelets, bandlets, ... Synthesis: f = Σ_m x_m φ_m = Φx, mapping coefficients x to the image f = Φx.
  23-26. Sparse Priors. Ideal sparsity: for most m, x_m = 0; J0(x) = #{m : x_m ≠ 0}. Sparse approximation: f⋆ = Φx⋆ where x⋆ ∈ argmin_{x ∈ R^N} ||f0 − Φx||² + T² J0(x). Orthogonal Φ (ΦΦ* = Φ*Φ = Id_N): the solution is hard thresholding, x⋆_m = ⟨f0, φ_m⟩ if |⟨f0, φ_m⟩| > T, 0 otherwise, so f⋆ = S_T(f0). Non-orthogonal Φ: the problem is NP-hard.
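The orthogonal case above can be sketched in a few lines of NumPy, using a random orthogonal matrix as an illustrative stand-in for a wavelet basis (the sparsity level and threshold are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 64
Psi, _ = np.linalg.qr(rng.standard_normal((Q, Q)))   # Psi @ Psi.T = Id

# A signal that is exactly 5-sparse in the basis Psi.
coeffs = np.concatenate([rng.standard_normal(5), np.zeros(Q - 5)])
f0 = Psi @ coeffs

T = 1e-3
x = Psi.T @ f0                             # analysis coefficients <f0, psi_m>
x_hard = np.where(np.abs(x) > T, x, 0.0)   # hard thresholding S_T
f_approx = Psi @ x_hard                    # f* = Psi S_T(Psi* f0)
```

Because Psi is orthogonal, the analysis step recovers the sparse coefficients exactly, and thresholding solves the J0-penalized problem in closed form.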
  27-29. Convex Relaxation: L1 Prior. For an image with 2 pixels: J0(x) = 0 (null image), J0(x) = 1 (sparse image), J0(x) = 2 (non-sparse image). ℓq priors: Jq(x) = Σ_m |x_m|^q, whose unit balls are shown in the (x1, x2) plane for q = 0, 1/2, 1, 3/2, 2; Jq is convex for q ≥ 1. The sparsity-promoting convex prior is ℓ1: J1(x) = Σ_m |x_m|.
  30-34. L1 Regularization. Coefficients x0 ∈ R^N, image f0 = Φx0 ∈ R^Q, observations y = K f0 + w ∈ R^P. Define Φ̃ = K Φ ∈ R^{P×N}. Sparse recovery: f⋆ = Φx⋆ where x⋆ solves min_{x ∈ R^N} (1/2)||y − Φ̃x||² + λ||x||_1 (fidelity + regularization).
  35-37. Noiseless Sparse Regularization. Noiseless measurements: y = Φx0. ℓ1 recovery x⋆ ∈ argmin_{Φx=y} Σ_m |x_m| promotes sparse solutions, whereas ℓ2 recovery x⋆ ∈ argmin_{Φx=y} Σ_m |x_m|² does not. The ℓ1 problem is a convex linear program: interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas-Rachford splitting, see [Combettes, Pesquet].
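Basis pursuit can indeed be written as a linear program and handed to a generic solver: split x = xp − xn with xp, xn ≥ 0, so that ||x||_1 = Σ(xp + xn). A small sketch with SciPy (sizes, seed, and sparsity level are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
P, N = 20, 40
Phi = rng.standard_normal((P, N))

x0 = np.zeros(N)
x0[[3, 17, 31]] = [2.0, -1.5, 1.0]   # 3-sparse ground truth
y = Phi @ x0                          # noiseless measurements

c = np.ones(2 * N)                    # objective: sum(xp) + sum(xn) = ||x||_1
A_eq = np.hstack([Phi, -Phi])         # constraint: Phi (xp - xn) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_rec = res.x[:N] - res.x[N:]
```

With a Gaussian Φ and this level of sparsity, the LP typically recovers x0 exactly, which is the phenomenon the slide illustrates geometrically.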
  38-40. Noisy Sparse Regularization. Noisy measurements: y = Φx0 + w. x⋆ ∈ argmin_{x ∈ R^N} (1/2)||y − Φx||² + λ||x||_1 (data fidelity + regularization). Equivalence with the constrained formulation: x⋆ ∈ argmin_{||Φx − y|| ≤ ε} ||x||_1. Algorithms: iterative soft thresholding / forward-backward splitting, see [Daubechies et al], [Pesquet et al]; Nesterov multi-step schemes.
  41. Overview: • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  42-44. Image De-blurring. Original f0, observations y = h ⋆ f0 + w. Sobolev regularization (SNR = 22.7dB): f⋆ = argmin_{f ∈ R^N} ||f ⋆ h − y||² + λ||∇f||², with the closed-form Fourier solution f̂⋆(ω) = ĥ(ω)* ŷ(ω) / (|ĥ(ω)|² + λ|ω|²). Sparsity regularization (SNR = 24.7dB): Ψ = translation invariant wavelets, f⋆ = Ψx⋆ where x⋆ ∈ argmin_x (1/2)||h ⋆ (Ψx) − y||² + λ||x||_1.
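The Sobolev closed-form solution is a one-liner in Fourier. A toy 1D sketch, assuming a box signal, a Gaussian blur kernel, and an arbitrary λ (none of these come from the course data):

```python
import numpy as np

n = 128
t = np.arange(n)
f0 = (np.abs(t - n / 2) < 20).astype(float)            # box signal
h = np.exp(-0.5 * ((t - n / 2) / 3.0) ** 2)
h /= h.sum()
h = np.roll(h, -n // 2)                                 # center kernel at 0

y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))
y += 0.01 * np.random.default_rng(2).standard_normal(n)

w = np.fft.fftfreq(n) * 2 * np.pi                       # frequencies omega
h_hat, y_hat = np.fft.fft(h), np.fft.fft(y)
lam = 1e-3
# f_hat(w) = conj(h_hat(w)) y_hat(w) / (|h_hat(w)|^2 + lam |w|^2)
f_hat = np.conj(h_hat) * y_hat / (np.abs(h_hat) ** 2 + lam * w ** 2)
f_rec = np.real(np.fft.ifft(f_hat))
```

The λ|ω|² term in the denominator damps the frequencies where ĥ(ω) is small, which is exactly what stabilizes the inversion against noise.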
  45. Comparison of Regularizations: SNR as a function of λ for each method, with the optimal λ marked. L2: SNR = 21.7dB; Sobolev: SNR = 22.7dB; sparsity: SNR = 23.7dB; translation invariant sparsity: SNR = 24.7dB.
  46. Inpainting Problem. (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω. Measurements: y = K f0 + w.
  47-49. Image Separation. Model: f = f1 + f2 + w, with (f1, f2) the components and w the noise. Union dictionary: Φ = [Φ1, Φ2] ∈ R^{Q×(N1+N2)}. Recovered components: f⋆_i = Φ_i x⋆_i, where (x⋆_1, x⋆_2) ∈ argmin_{x=(x1,x2) ∈ R^N} (1/2)||f − Φx||² + λ||x||_1.
  50. Examples of Decompositions
  51. Cartoon+Texture Separation
  52. Overview: • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  53-54. Sparse Regularization Denoising. Denoising: y = x0 + w ∈ R^N, K = Id. Orthogonal basis: ΦΦ* = Φ*Φ = Id_N, x = Φ* f. Regularization-based denoising: x⋆ = argmin_{x ∈ R^N} (1/2)||x − y||² + λ J(x). Sparse regularization: J(x) = Σ_m |x_m|^q (with the convention |a|^0 = 1 if a ≠ 0, 0 otherwise). The solution is coordinate-wise thresholding: x⋆_m = S^q_T(y_m).
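The q = 1 and q = 0 thresholding rules can be written down and checked numerically against a brute-force minimization of the 1D objective (values of λ and the test points are arbitrary):

```python
import numpy as np

def soft(y, T):
    """Soft thresholding: minimizer of (1/2)(x - y)^2 + T|x| (q = 1)."""
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

def hard(y, T):
    """Hard thresholding: keep y when |y| > T, else 0 (q = 0)."""
    return np.where(np.abs(y) > T, y, 0.0)

# Brute-force check that soft(y, lam) minimizes the 1D l1-penalized objective.
lam = 0.7
grid = np.linspace(-5, 5, 20001)
for y in [-2.3, -0.4, 0.1, 1.9]:
    obj = 0.5 * (grid - y) ** 2 + lam * np.abs(grid)
    assert abs(grid[np.argmin(obj)] - soft(y, lam)) < 1e-3
```

Soft thresholding shrinks every surviving coefficient by T, while hard thresholding keeps it untouched; this bias is the price of convexity.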
  55-56. Surrogate Functionals. Sparse regularization: x⋆ ∈ argmin_{x ∈ R^N} E(x) = (1/2)||y − Φx||² + λ||x||_1. Surrogate functional, for τ < 1/||Φ*Φ||: E(x, x̃) = E(x) − (1/2)||Φ(x − x̃)||² + (1/2τ)||x − x̃||² ≥ E(x). Theorem: argmin_x E(x, x̃) = S_{λτ}(u), where u = x̃ − τ Φ*(Φx̃ − y). Proof: E(x, x̃) ∝ (1/2)||u − x||² + λτ||x||_1 + cst.
  57-59. Iterative Thresholding. Algorithm: x^(ℓ+1) = argmin_x E(x, x^(ℓ)). Initialize x^(0) and set ℓ = 0; then iterate u^(ℓ) = x^(ℓ) − τ Φ*(Φ x^(ℓ) − y) and x^(ℓ+1) = S_{λτ}(u^(ℓ)). Remark: x^(ℓ) ↦ u^(ℓ) is a gradient descent step on (1/2)||Φx − y||², and S_{λτ} is the proximal step associated to λ||x||_1. Theorem: if τ < 2/||Φ*Φ||, then x^(ℓ) → x⋆.
  60. Overview: • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  61. Seismic Imaging
  62. 1D Idealization. Initial condition: a "wavelet" = band-pass filter h. 1D propagation acts as a convolution f ↦ h ⋆ f. Observations: y = f0 ⋆ h + w ∈ R^P.
  63. Pseudo Inverse. Pseudo-inverse: f̂⁺(ω) = ŷ(ω)/ĥ(ω), i.e. f⁺ = h⁺ ⋆ y = f0 + h⁺ ⋆ w, where ĥ⁺(ω) = 1/ĥ(ω). Stabilization: set ĥ⁺(ω) = 0 if |ĥ(ω)| < ε.
  64-65. Sparse Spikes Deconvolution. Goal: recover f with small ||f||_0 from y = f ⋆ h + w. Sparsity basis: Diracs, φ_m[x] = δ[x − m]. f⋆ = argmin_{f ∈ R^N} (1/2)||f ⋆ h − y||² + λ Σ_m |f[m]|. Algorithm: τ < 2/||Φ*Φ|| = 2/max_ω |ĥ(ω)|²; iterate the inversion step f̃^(k) = f^(k) − τ h ⋆ (h ⋆ f^(k) − y) and the thresholding step f^(k+1)[m] = S_{λτ}(f̃^(k)[m]).
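The spike-deconvolution iteration above can be sketched with FFT-based convolutions. The band-pass wavelet, spike train, noise level, λ, and iteration count are all illustrative; the adjoint of convolution by h (correlation with h) is implemented via conj(ĥ) in Fourier:

```python
import numpy as np

n = 256
t = np.arange(n) - n // 2
h = (1 - (t / 5.0) ** 2) * np.exp(-0.5 * (t / 5.0) ** 2)  # Mexican-hat-like
h = np.roll(h, -n // 2)                                    # center at 0
h_hat = np.fft.fft(h)

f0 = np.zeros(n)
f0[[60, 130, 180]] = [1.0, -0.8, 1.2]                      # sparse spike train
rng = np.random.default_rng(4)
y = np.real(np.fft.ifft(h_hat * np.fft.fft(f0))) + 0.02 * rng.standard_normal(n)

lam = 0.1
tau = 1.0 / np.max(np.abs(h_hat)) ** 2     # tau < 2 / max |h_hat|^2
f = np.zeros(n)
for _ in range(3000):
    r = np.real(np.fft.ifft(h_hat * np.fft.fft(f))) - y       # h * f - y
    g = np.real(np.fft.ifft(np.conj(h_hat) * np.fft.fft(r)))  # adjoint step
    u = f - tau * g
    f = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)   # S_{lam*tau}
```

Because the Diracs form the identity basis, the whole iteration runs in O(n log n) per step via the FFT.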
  66. Numerical Example. Choosing the optimal λ: oracle choice, minimizing ||f0 − f⋆||. Figures: f0, the observations y = f0 ⋆ h + w, the curve SNR(f0, f⋆) as a function of λ, and the recovered f⋆.
  67. Convergence Study. Sparse deconvolution: f⋆ = argmin_{f ∈ R^N} E(f), with energy E(f) = (1/2)||h ⋆ f − y||² + λ Σ_m |f[m]|. E is not strictly convex, so there is no guaranteed convergence speed. Figures: log10(E(f^(k))/E(f⋆) − 1) and log10(||f^(k) − f⋆||/||f0||) as functions of k.
  68-70. Conclusion
