4. Inverse Problems

Forward model:   y = K f0 + w ∈ R^P
  y ∈ R^P : observations
  f0 ∈ R^Q : unknown input
  K : R^Q → R^P : operator
  w ∈ R^P : noise

Denoising: K = Id_Q, P = Q.

Inpainting: Ω = set of missing pixels, P = Q − |Ω|.
  (Kf)(x) = 0     if x ∈ Ω,
            f(x)  if x ∉ Ω.

Super-resolution: Kf = (f ⋆ k) ↓_s (filtering then sub-sampling by a factor s), P = Q/s.
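The three example operators can be sketched in a few lines of numpy on a 1-D signal. This is a minimal illustration, not the slides' code; the function names and the choice of a periodic box filter for super-resolution are mine.

```python
import numpy as np

def denoising_op(f):
    """K = Id: the observation is the image itself."""
    return f

def inpainting_op(f, mask):
    """Keep f(x) where mask is True, zero out the missing pixels (x in Omega)."""
    return np.where(mask, f, 0.0)

def superres_op(f, k, s):
    """Filter with kernel k (periodic convolution via FFT), then sub-sample by s."""
    blurred = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(k, len(f))))
    return blurred[::s]

Q = 64
rng = np.random.default_rng(0)
f0 = rng.standard_normal(Q)
mask = rng.random(Q) > 0.3                    # ~30% of pixels missing
w = 0.1 * rng.standard_normal(Q)              # noise

y_den = denoising_op(f0) + w                  # P = Q
y_inp = inpainting_op(f0, mask)               # effective P = Q - |Omega|
y_sr  = superres_op(f0, np.ones(4) / 4, s=4)  # P = Q / 4
```

Each operator is linear, so the forward model y = K f0 + w stays linear in f0 in all three cases.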
11. Inverse Problem Regularization

Noisy measurements: y = K f0 + w.
Prior model: J : R^Q → R assigns a score to images.

  f* ∈ argmin_{f ∈ R^Q}  (1/2)||y − K f||^2 + λ J(f)
                          data fidelity        regularity

Choice of λ: tradeoff between
  noise level ||w||   and   regularity of f0, J(f0).

No noise: λ → 0+, minimize
  f* ∈ argmin_{f ∈ R^Q, Kf = y}  J(f)
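For the simplest quadratic prior the variational problem has a closed form, which makes the fidelity/regularity tradeoff concrete. A minimal sketch, assuming J(f) = (1/2)||f||^2 (a ridge/Tikhonov prior chosen by me for illustration; the slides keep J generic):

```python
import numpy as np

# Instance of f* = argmin_f 1/2 ||y - K f||^2 + lam * J(f) with
# J(f) = 1/2 ||f||^2, whose minimizer is f* = (K^T K + lam I)^{-1} K^T y.
np.random.seed(0)
P, Q, lam = 30, 50, 0.5
K = np.random.randn(P, Q)
f0 = np.random.randn(Q)
y = K @ f0 + 0.1 * np.random.randn(P)

f_star = np.linalg.solve(K.T @ K + lam * np.eye(Q), K.T @ y)

# First-order optimality condition: K^T (K f* - y) + lam f* = 0
grad = K.T @ (K @ f_star - y) + lam * f_star
```

Larger lam pulls f* towards the prior (here, towards 0); smaller lam trusts the noisy data more.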
20. Redundant Dictionaries

Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q.

Fourier: φ_m = e^(i ω_m ·), m ↔ frequency ω_m.
Wavelets: m = (j, θ, n), φ_m(x) = ψ(2^(−j) R_θ (x − n))
  j = scale, n = position, θ = orientation.
DCT, curvelets, bandlets, ...

[Figure: wavelet atoms for orientations θ = 1 and θ = 2; Φ drawn as a Q × N matrix.]

Synthesis: f = Σ_m x_m φ_m = Φ x.
  Coefficients x ∈ R^N  →  image f = Φ x ∈ R^Q.
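A concrete redundant dictionary: the union of the Dirac basis and an orthonormal cosine (DCT-II) basis, so Φ is Q × N with N = 2Q > Q. This union-of-bases construction is my illustration, not one from the slides; it shows a signal that is sparse in the redundant Φ but in neither basis alone.

```python
import numpy as np

Q = 32
n = np.arange(Q)
k = n[:, None]

# Orthonormal DCT-II matrix: row k is the cosine atom of frequency k.
C = np.sqrt(2.0 / Q) * np.cos(np.pi * (n + 0.5) * k / Q)
C[0, :] /= np.sqrt(2.0)

diracs = np.eye(Q)
cosines = C.T                       # columns: orthonormal cosine atoms
Phi = np.hstack([diracs, cosines])  # dictionary, Q x 2Q

# One spike plus one oscillation: 2-sparse in Phi.
x = np.zeros(2 * Q)
x[5] = 1.0        # Dirac atom
x[Q + 3] = 2.0    # cosine atom
f = Phi @ x       # synthesis f = sum_m x_m phi_m
```

Since N > Q the synthesis map Φ is not invertible: many coefficient vectors x produce the same image f, and a prior is needed to pick one.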
23. Sparse Priors

Ideal sparsity: for most m, x_m = 0.
  J0(x) = # {m : x_m ≠ 0}

Sparse approximation: f* = Φ x* where
  x* ∈ argmin_{x ∈ R^N}  ||f0 − Φ x||^2 + T^2 J0(x)

Orthogonal Φ: Φ Φ* = Φ* Φ = Id_N. Hard thresholding:
  x*_m = ⟨f0, φ_m⟩  if |⟨f0, φ_m⟩| > T,
         0           otherwise.
  f* = S_T(f0)   (hard thresholding operator S_T)

Non-orthogonal Φ: NP-hard.
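In the orthogonal case the hard-thresholding solution is easy to check numerically: the objective separates over coefficients, and keeping exactly the coefficients above T is optimal. A sketch with an arbitrary orthobasis built by QR (the basis choice is mine, for illustration only):

```python
import numpy as np

def hard_threshold(coeffs, T):
    """Keep coefficients with |c| > T, zero out the rest."""
    return np.where(np.abs(coeffs) > T, coeffs, 0.0)

np.random.seed(1)
Q, T = 40, 0.8
Phi, _ = np.linalg.qr(np.random.randn(Q, Q))  # orthonormal columns
f0 = np.random.randn(Q)

x_full = Phi.T @ f0                  # analysis coefficients <f0, phi_m>
x_star = hard_threshold(x_full, T)   # minimizer of ||f0 - Phi x||^2 + T^2 J0(x)
f_star = Phi @ x_star

def energy(x):
    """The sparse-approximation objective with the l0 penalty."""
    return np.sum((f0 - Phi @ x) ** 2) + T**2 * np.count_nonzero(x)
```

Per coefficient the objective contributes min(x_m^2, T^2): pay x_m^2 to drop the coefficient or T^2 to keep it, which is exactly the |x_m| > T rule.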
32. L1 Regularization

  x0 ∈ R^N   →(Φ)→   f0 = Φ x0 ∈ R^Q   →(K, +w)→   y = K f0 + w ∈ R^P
  coefficients         image                          observations

Combined operator: A = K Φ ∈ R^{P×N}.

Sparse recovery: f* = Φ x* where x* solves
  min_{x ∈ R^N}  (1/2)||y − A x||^2 + λ ||x||_1
                  fidelity            regularization
37. Noiseless Sparse Regularization

Noiseless measurements: y = A x0.

  ℓ1:  x* ∈ argmin_{Ax = y} Σ_m |x_m|        ℓ2:  x* ∈ argmin_{Ax = y} Σ_m |x_m|^2

[Figure: the ℓ1 ball meets the constraint set {x : Ax = y} at a sparse point; the ℓ2 ball does not.]

ℓ1 minimization is a convex linear program.
  Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit".
  Douglas-Rachford splitting, see [Combettes, Pesquet].
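The linear-program reformulation is standard: split x = x+ − x− with x+, x− ≥ 0, so that ||x||_1 = Σ(x+ + x−). A sketch solving it with scipy's generic LP solver (my choice for a self-contained demo, not the interior-point or splitting codes cited above):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit:  min ||x||_1  s.t.  A x = y,  as the LP
#   min  sum(xp) + sum(xn)   s.t.  [A, -A] [xp; xn] = y,  xp, xn >= 0.
np.random.seed(0)
P, N = 20, 40
A = np.random.randn(P, N)
x0 = np.zeros(N)
x0[[3, 17]] = [1.5, -2.0]           # 2-sparse ground truth
y = A @ x0                          # noiseless measurements

c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y)  # default bounds: vars >= 0
x_star = res.x[:N] - res.x[N:]
```

Any minimizer is feasible and has ℓ1 norm at most ||x0||_1, since x0 itself is feasible.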
39. Noisy Sparse Regularization

Noisy measurements: y = A x0 + w.

Penalized form:
  x* ∈ argmin_{x ∈ R^N}  (1/2)||y − A x||^2 + λ ||x||_1
                          data fidelity       regularization

Equivalence with the constrained form:
  x* ∈ argmin_{||A x − y|| ≤ ε}  ||x||_1

Algorithms:
  Iterative soft thresholding
  Forward-backward splitting
    see [Daubechies et al], [Pesquet et al], etc.
  Nesterov multi-step schemes.
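The building block of all these algorithms is soft thresholding, the exact proximal map of the ℓ1 penalty. A minimal sketch, checked against brute-force minimization of the scalar problem it solves:

```python
import numpy as np

def soft_threshold(u, t):
    """S_t(u) = sign(u) * max(|u| - t, 0): the proximal map of x -> t ||x||_1,
    i.e. the exact minimizer of 1/2 ||x - u||^2 + t ||x||_1 (separable in x_m)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# Small coefficients are set exactly to zero, large ones are shrunk by t.
u = np.array([-2.3, -0.5, 0.0, 0.4, 3.1])
x = soft_threshold(u, 0.7)   # approx [-1.6, 0., 0., 0., 2.4]
```

Setting small entries exactly to zero, rather than merely shrinking them, is what makes ℓ1 regularization produce sparse solutions.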
43. Image De-blurring

[Figure: original f0; observations y = h ⋆ f0 + w; Sobolev restoration, SNR = 22.7 dB; sparsity restoration, SNR = 24.7 dB.]

Sobolev regularization:
  f* = argmin_{f ∈ R^N}  ||f ⋆ h − y||^2 + λ ||∇f||^2
Closed form in Fourier:
  f̂*(ω) = conj(ĥ(ω)) ŷ(ω) / ( |ĥ(ω)|^2 + λ |ω|^2 )

Sparsity regularization: Φ = translation invariant wavelets.
  f* = Φ x* where x* ∈ argmin_x  (1/2)||h ⋆ (Φ x) − y||^2 + λ ||x||_1
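The Sobolev case needs no iteration: the closed form is a single division per Fourier coefficient. A 1-D sketch with periodic boundary conditions (the slides work on images; the signal and kernel here are my choices for a self-contained demo):

```python
import numpy as np

def sobolev_deblur(y, h, lam):
    """Minimize ||f * h - y||^2 + lam ||grad f||^2 via the Fourier closed form
    fhat(w) = conj(hhat(w)) yhat(w) / (|hhat(w)|^2 + lam w^2)."""
    n = len(y)
    hf = np.fft.fft(h, n)
    yf = np.fft.fft(y)
    omega = 2 * np.pi * np.fft.fftfreq(n)   # discrete frequency grid
    ff = np.conj(hf) * yf / (np.abs(hf) ** 2 + lam * omega ** 2)
    return np.real(np.fft.ifft(ff))

np.random.seed(0)
n = 128
f0 = np.cumsum(np.random.randn(n)); f0 -= f0.mean()   # smooth-ish test signal
h = np.exp(-np.arange(n) / 4.0); h /= h.sum()         # smoothing kernel
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))  # noiseless blur
f_star = sobolev_deblur(y, h, lam=1e-3)
```

The λ|ω|^2 term in the denominator damps the high frequencies where |ĥ(ω)| is small, which is exactly where naive inversion ŷ/ĥ would amplify noise.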
56. Surrogate Functionals

Sparse regularization:
  x* ∈ argmin_{x ∈ R^N}  E(x) = (1/2)||y − A x||^2 + λ ||x||_1

Surrogate functional, for τ < 1/||A* A||:
  E(x, x̃) = E(x) − (1/2)||A (x − x̃)||^2 + (1/(2τ))||x − x̃||^2

Theorem:
  argmin_x E(x, x̃) = S_{λτ}(u)   where   u = x̃ − τ A*(A x̃ − y)
and S_{λτ} is soft thresholding at level λτ.

[Figure: E(·, x̃) majorizes E(·) and touches it at x̃; its minimizer is S_{λτ}(u).]

Proof: E(x, x̃) = (1/(2τ))||u − x||^2 + λ ||x||_1 + cst.
57. Iterative Thresholding

Algorithm: x^(ℓ+1) = argmin_x E(x, x^(ℓ))

Initialize x^(0), set ℓ = 0; iterate:
  u^(ℓ) = x^(ℓ) − τ A*(A x^(ℓ) − y)
  x^(ℓ+1) = S_{λτ}(u^(ℓ))

[Figure: each surrogate E(·, x^(ℓ)) majorizes E(·); the iterates x^(0), x^(1), x^(2), ... descend towards x*.]

Remark:
  x^(ℓ) ↦ u^(ℓ) is a gradient-descent step on (1/2)||A x − y||^2.
  S_{λτ} is the proximal step associated to λ ||x||_1.

Theorem: if τ < 2/||A* A||, then x^(ℓ) → x*.
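The two-step iteration above (gradient step, then soft threshold) fits in a few lines. A minimal numpy sketch of iterative soft thresholding (ISTA); the test problem and step-size choice τ = 0.9/||A||^2 are mine:

```python
import numpy as np

def ista(A, y, lam, tau, n_iter=500):
    """Iterative soft thresholding for  min_x 1/2 ||y - A x||^2 + lam ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x - tau * A.T @ (A @ x - y)                           # gradient step
        x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)   # prox step S_{lam*tau}
    return x

np.random.seed(0)
P, N, lam = 30, 60, 0.1
A = np.random.randn(P, N)
x0 = np.zeros(N); x0[[5, 25, 40]] = [2.0, -1.5, 1.0]   # 3-sparse ground truth
y = A @ x0 + 0.05 * np.random.randn(P)

tau = 0.9 / np.linalg.norm(A, 2) ** 2   # tau < 1/||A^T A||, so E decreases
x_star = ista(A, y, lam, tau)

def E(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(x))
```

Because each iterate minimizes a surrogate that majorizes E and touches it at the current point, E(x^(ℓ)) is non-increasing along the iterations.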