39. Dissertation Defense, April 14th, 2014
Overview of Proposed Research
Part 1: Theory
▪ Defining theoretically optimal performance models in the label fusion framework
Part 2: Applications
▪ Robust multi-atlas segmentation in the presence of highly variable atlas-target correspondences
▪ Removing the need for expensive pairwise registrations through big data paradigms
Part 1
86. Statistical Label Fusion: Expectation-Maximization (EM)
The Rater Model: Simultaneous Truth and Performance Level Estimation (STAPLE) (Warfield, et al. 2004)
Confusion Matrix (equation shown on slide)
E-Step: Estimate the Labels (equation shown on slide; annotated terms: Prior, Partition Function)
M-Step: Update the Model (equation shown on slide; annotated terms: Prior, Partition Function)
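The E-step/M-step alternation above can be sketched numerically. A minimal sketch, assuming discrete labels, one global confusion matrix per rater, and a stationary label prior taken from the observed label frequencies (the full STAPLE model of Warfield et al. 2004 is richer than this):

```python
import numpy as np

def staple(votes, n_labels, n_iter=20):
    """Minimal STAPLE-style EM sketch.

    votes: (n_raters, n_voxels) integer label observations.
    Returns voxelwise label probabilities W and per-rater confusion matrices theta.
    """
    n_raters, n_voxels = votes.shape
    # Stationary prior from observed label frequencies (a simplifying assumption).
    prior = np.bincount(votes.ravel(), minlength=n_labels) / votes.size
    # Initialize each rater as mostly reliable (diagonally dominant confusion matrix).
    theta = np.full((n_raters, n_labels, n_labels), 0.1 / max(n_labels - 1, 1))
    for j in range(n_raters):
        np.fill_diagonal(theta[j], 0.9)

    for _ in range(n_iter):
        # E-Step: estimate the labels.  W[s, i] ∝ prior[s] * Π_j theta[j, s, votes[j, i]]
        W = np.tile(prior[:, None], (1, n_voxels))
        for j in range(n_raters):
            W *= theta[j][:, votes[j]]
        W /= W.sum(axis=0, keepdims=True)   # normalizer plays the partition-function role

        # M-Step: update the model.  theta[j, s, s'] ∝ Σ_i W[s, i] * 1[votes[j, i] = s']
        for j in range(n_raters):
            for sp in range(n_labels):
                theta[j, :, sp] = W[:, votes[j] == sp].sum(axis=1)
            theta[j] /= theta[j].sum(axis=1, keepdims=True)
    return W, theta
```

The fused segmentation is then `W.argmax(axis=0)`; raters who disagree with the emerging consensus get off-diagonal mass in their confusion matrices and are down-weighted in the next E-step.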
103. So, what's the problem?
The traditional rater performance models are too simple.
Despite elegant theory, STAPLE methods are consistently outperformed by ad hoc voting-based techniques.
Thus, we need models that characterize:
▪ 1) Spatially-varying rater (atlas) performance
▪ 2) Imperfect correspondence
▪ 3) Hierarchical performance estimation
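For reference, the voting-based techniques mentioned above can be as simple as a per-voxel majority vote over the registered atlas labels; a minimal sketch:

```python
import numpy as np

def majority_vote(votes, n_labels):
    """Per-voxel majority vote over rater/atlas observations.

    votes: (n_raters, n_voxels) integer labels. Ties resolve to the lowest label.
    """
    counts = np.zeros((n_labels, votes.shape[1]), dtype=int)
    for j in range(votes.shape[0]):
        # Tally one vote per rater at each voxel.
        np.add.at(counts, (votes[j], np.arange(votes.shape[1])), 1)
    return counts.argmax(axis=0)
```

Despite having no performance model at all, this baseline is the one the STAPLE variants above must beat.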
108. Overview of Contributions
Part 1: Theory
Contribution 1
▪ Characterizing spatially-varying performance
Contribution 2
▪ Incorporating imperfect correspondence
Contribution 3
▪ Hierarchical performance estimation
111. The Spatial Problem (Part 1, Contribution 1)
Raters (or atlases) do not always perform consistently.
▪ Global performance evaluation is theoretically sub-optimal.
112. Our Proposed Solution: Spatially-Varying Performance
Reformulate STAPLE to allow for voxelwise performance estimates.
▪ Define a semi-local region over which the voxelwise estimates are calculated
▪ We call this algorithm Spatial STAPLE
113. Spatial STAPLE Theory: Redefining the Rater Model
Allow each rater to be characterized by multiple confusion matrices.
Each local confusion matrix is defined over a "pooling region":
▪ Each confusion matrix is defined over its own region, a semi-local neighborhood (equation shown on slide)
118. Spatial STAPLE Theory: EM Algorithm
E-Step / M-Step (equations shown on slide)
A simple, yet powerful modification to the STAPLE framework.
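The key modification can be illustrated in code: the M-step tallies are accumulated over a semi-local pooling region around each voxel instead of globally. A minimal 1D sketch for a single rater (the sliding-window pooling region and uniform weighting here are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np

def pooled_confusion(W, rater_votes, n_labels, half_width):
    """Voxelwise confusion matrices pooled over a semi-local 1D window.

    W:           (n_labels, n_voxels) current label probabilities (E-step output).
    rater_votes: (n_voxels,) one rater's observed labels.
    Returns theta: (n_voxels, n_labels, n_labels), row-normalized per voxel.
    """
    n_voxels = W.shape[1]
    # Per-voxel tallies: tally[i, s, s'] = W[s, i] * 1[rater_votes[i] = s']
    tally = np.zeros((n_voxels, n_labels, n_labels))
    tally[np.arange(n_voxels), :, rater_votes] = W.T
    # Pool tallies over the window around each voxel (cumulative-sum trick).
    csum = np.concatenate([np.zeros((1, n_labels, n_labels)), tally.cumsum(axis=0)])
    lo = np.clip(np.arange(n_voxels) - half_width, 0, n_voxels)
    hi = np.clip(np.arange(n_voxels) + half_width + 1, 0, n_voxels)
    theta = csum[hi] - csum[lo]
    theta /= theta.sum(axis=2, keepdims=True) + 1e-12   # rows s -> s' sum to one
    return theta
```

Shrinking the window toward a single voxel gives fully local estimates; growing it to the whole image recovers the global STAPLE confusion matrix.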
119. Methods and Results
Manual Labeling of Malignant Glioma
▪ Gd-enhanced T1-weighted images
▪ Approximately 1×1×3 mm resolution
Multi-Atlas Segmentation of Head and Neck Anatomy
▪ CT images
▪ Approximately 1×1×3 mm resolution
124. Summary and Contributions: Spatial STAPLE
▪ Enables smooth, spatially-varying estimates of rater performance
▪ Provides a significant improvement in segmentation accuracy
▪ Finished 5th (out of 25) in the 2012 MICCAI Challenge on Multi-Atlas Labeling
Publications
▪ Andrew J. Asman and Bennett A. Landman, "Formulating Spatially Varying Performance in the Statistical Fusion Framework", IEEE Transactions on Medical Imaging, June 2012.
▪ Andrew J. Asman and Bennett A. Landman, "Characterizing Spatially Varying Performance to Improve Multi-Atlas Multi-Label Segmentation", In Proceedings of the Conference on Information Processing in Medical Imaging (IPMI), Germany, July 2011.
125. Overview of Contributions: Part 1, Contribution 2 (Incorporating imperfect correspondence)
136. Our Proposed Solution: Non-Local Correspondence
Atlas-target correspondence between the atlas and target images (figure shown on slide)
Non-Local Means (Buades, et al. 2005)
Non-Local STAPLE
139. Non-Local STAPLE Theory: Non-Local Correspondence Model
A (non-local) correspondence model defines the probability density function (equation shown on slide).
Here, we define a non-local correspondence model given two neighborhoods:
▪ The search neighborhood
▪ The patch neighborhood
▪ s.t. (normalization constraint equation shown on slide)
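In the spirit of non-local means, the correspondence weight between a target voxel and nearby atlas voxels can be sketched from Gaussian-weighted patch distances over the search neighborhood. A minimal 1D sketch (the edge padding, L2 patch distance, and single bandwidth `sigma` are illustrative assumptions):

```python
import numpy as np

def nonlocal_weights(target, atlas, i, search_hw, patch_hw, sigma):
    """Non-local correspondence weights for target voxel i against an atlas (1D sketch).

    Compares the intensity patch around i with patches around each atlas voxel
    in the search neighborhood; weights fall off with patch distance.
    """
    n = len(target)
    pad_t = np.pad(target, patch_hw, mode='edge')
    pad_a = np.pad(atlas, patch_hw, mode='edge')
    t_patch = pad_t[i:i + 2 * patch_hw + 1]      # patch neighborhood around i
    lo = max(i - search_hw, 0)
    hi = min(i + search_hw, n - 1)
    idx = np.arange(lo, hi + 1)                  # search neighborhood
    w = np.empty(len(idx))
    for k, ip in enumerate(idx):
        a_patch = pad_a[ip:ip + 2 * patch_hw + 1]
        # L2 patch distance drives the correspondence probability.
        w[k] = np.exp(-np.sum((t_patch - a_patch) ** 2) / (2 * sigma ** 2))
    w /= w.sum()                                 # normalize so the weights form a density
    return idx, w
```

Fusing each atlas voxel's label with these weights is how registration uncertainty and image intensity enter the rater model.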
142. Non-Local STAPLE Theory: Redefining the Rater Model
Using the non-local correspondence model, we redefine the rater model (equation shown on slide).
The new model captures what label the rater meant to observe.
147. Non-Local STAPLE Theory: EM Algorithm
E-Step / M-Step (equations shown on slide)
A straightforward, theoretically elegant way to incorporate non-local intensity correspondence.
154. Summary and Contributions: Non-Local STAPLE
▪ Enables a direct mechanism for incorporating registration uncertainty and image intensity into the STAPLE framework
▪ Provides a significant improvement in segmentation accuracy
▪ Finished 2nd (out of 25) in the 2012 MICCAI Challenge on Multi-Atlas Labeling
Publications
▪ Andrew J. Asman and Bennett A. Landman, "Non-Local Statistical Label Fusion for Multi-Atlas Segmentation", Medical Image Analysis, February 2013.
▪ Andrew J. Asman and Bennett A. Landman, "Non-Local STAPLE: An Intensity-Driven Multi-Atlas Rater Model", In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Nice, France, September 2012.
155. Overview of Contributions: Part 1, Contribution 3 (Hierarchical performance estimation)
161. The Hierarchy Problem
Label hierarchy (figure shown on slide): Brain > {Cerebrum, Cerebellum}; Cerebrum > {Cerebral Cortex, Cerebral White Matter, Deep Brain Structures, …}; down to All Labels.
How can we estimate a unified model of hierarchical performance?
164. Our Proposed Solution: The Hierarchical Performance Model
Traditional performance vs. hierarchical performance (figure shown on slide).
Hierarchical performance is a constrained geometric mean of performance across the hierarchy.
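The "constrained geometric mean across the hierarchy" can be sketched as multiplying a rater's confusion entries at each hierarchy level and renormalizing so each row remains a distribution. A minimal sketch (the mapping arrays, equal level weighting, and renormalization-as-constraint are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np

def hierarchical_performance(thetas, maps, s, sp):
    """Geometric mean of rater performance across hierarchy levels (sketch).

    thetas: list of per-level confusion matrices (one per hierarchy level).
    maps:   list of arrays mapping each fine label to its label at that level.
    s, sp:  true and observed fine labels.
    Renormalizing the row plays the role of the constraint that performance
    over observed labels remains a probability distribution.
    """
    M = len(thetas)
    n_fine = len(maps[0])
    # Unnormalized geometric mean across levels for every observed fine label.
    row = np.ones(n_fine)
    for theta, m in zip(thetas, maps):
        row *= theta[m[s], m]            # performance at this level's granularity
    row = row ** (1.0 / M)
    row /= row.sum()                     # constrained: the row sums to one
    return row[sp]
```

A rater who is accurate at the coarse level (e.g. brain vs. non-brain) but sloppy at the fine level is thereby credited for the coarse agreement instead of being treated as uniformly wrong.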
176. Hierarchical Performance Model Theory
Generative Model of Performance (equation shown on slide), with annotated terms:
▪ Observed label at the current voxel
▪ True label at the current voxel
▪ Hierarchical confusion matrices
▪ Exponential partition function
▪ Hierarchical mapping vector (label s at hierarchy level m)
The hierarchical confusion matrices are constrained such that: (constraint equation shown on slide)
194. Expectation-Maximization (E-Step): Estimation of the Label Probabilities
(Equations shown on slide; annotated terms: Prior, Partition Function.)
Exactly the same as the classic statistical fusion derivation, with the updated hierarchical performance model.
228. Summary and Contributions: Hierarchical Performance Estimation
▪ A fundamental advancement to statistical fusion performance modeling
▪ Provides a significant improvement in segmentation accuracy
▪ Highly amenable to state-of-the-art statistical fusion
▪ Best Student Paper Finalist, SPIE Medical Imaging 2014
Publications
▪ Andrew J. Asman and Bennett A. Landman, "Hierarchical Performance Estimation in the Statistical Label Fusion Framework", Medical Image Analysis, Conditionally Accepted, April 2014.
▪ Andrew J. Asman, Alexander S. Dagley, and Bennett A. Landman, "Statistical label fusion with hierarchical performance models", In Proceedings of the SPIE Medical Imaging Conference, San Diego, California, February 2014 (Oral Presentation).
229. Overview of Proposed Research (Part 2)
Part 1: Theory
▪ Defining theoretically optimal performance models in the label fusion framework
Part 2: Applications
▪ Robust multi-atlas segmentation in the presence of highly variable atlas-target correspondences
▪ Removing the need for expensive pairwise registrations through big data paradigms
232. Overview of Contributions
Part 2: Applications
Contribution 1
▪ Groupwise multi-atlas segmentation of the spinal cord's internal structure
Contribution 2
▪ Geodesic Learner Fusion
236. Why the spinal cord?
Affected by numerous neurological conditions
▪ E.g., ALS, MS
Reasonable MR contrast for the internal structure has only recently become feasible
▪ No automated GM/WM segmentation has been reported.
It's challenging.
243. Approach Overview
Process each axial slice independently.
Build a consistent model of spinal cord appearance variability.
Perform model-informed multi-atlas segmentation.
245. Modeling Spinal Cord Variability
Register all atlas slices to the same space
▪ 3 d.o.f. rigid registration
Create a groupwise appearance model
▪ Principal Component Analysis (PCA)
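A groupwise PCA appearance model of this kind can be sketched by stacking the co-registered slices as vectors and taking the principal directions of the centered data via SVD. A minimal sketch (function names and the SVD route are illustrative, not the thesis's implementation):

```python
import numpy as np

def pca_model(slices, n_components):
    """Groupwise PCA appearance model from co-registered slices (sketch).

    slices: (n_slices, n_pixels) flattened, registered atlas slices.
    Returns the mean slice and the top principal components.
    """
    mean = slices.mean(axis=0)
    X = slices - mean                          # center the data
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:n_components]

def project(slice_vec, mean, components):
    """Low-dimensional coordinates of a slice under the model."""
    return components @ (slice_vec - mean)
```

A new target slice is then registered and compared to atlases in this low-dimensional coordinate space rather than pixel by pixel.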
247. Model-based Groupwise Registration
Register the target to the groupwise-consistent model representation of the spinal cord.
253. Estimation of Final Segmentation
Fuse geodesically appropriate atlases.
Use the inverse transformation to return to target space.
254. Data and Methods
67 T2*-weighted MR volumes of the cervical spinal cord
▪ 3T Philips Achieva scanner
▪ Field of view of approximately 190×224×90 mm³
▪ Nominal resolution of 0.6×0.6×3 mm³
"Ground truth" labels obtained from an experienced rater
We consider various fusion algorithms using:
▪ Pairwise volumetric registration (ANTs)
▪ Pairwise slice-based registration (NiftyReg)
▪ The proposed groupwise registration
257. Summary and Contributions: Groupwise multi-atlas segmentation of the spinal cord
▪ Models cervical spinal cord appearance variability
▪ Robust (and efficient) groupwise registration
▪ Model-informed atlas selection
▪ The first fully-automated approach for segmenting the spinal cord's internal structure
Publications
▪ Andrew J. Asman, Seth A. Smith, Daniel S. Reich, and Bennett A. Landman, "Robust GM/WM Segmentation of the Spinal Cord with Iterative Non-Local Statistical Fusion", In MICCAI, Nagoya, Japan, September 2013.
▪ Andrew J. Asman, Frederick W. Bryan, Seth A. Smith, Daniel S. Reich, and Bennett A. Landman, "Groupwise Multi-Atlas Segmentation of the Spinal Cord's Internal Structure", Medical Image Analysis, April 2014.
258. Overview of Contributions: Part 2, Contribution 2 (Geodesic Learner Fusion)
271. Geodesic Learner Fusion
Given a database of pre-computed multi-atlas segmentations:
Can we use machine learning to map a weak initial estimate to the multi-atlas segmentation estimate?
273. Don't you need a lot of data?
Dataset                                              Training      Testing    Reproducibility
1000 Functional Connectome (fcon_1000) [a]           1055 (1055)   117 (117)
Baltimore Longitudinal Study on Aging (BLSA)         578 (883)     64 (94)
Information eXtraction from Images (IXI) [c]         523 (523)     58 (58)
Deep Brain Stimulation (DBS)                         493 (493)     54 (54)
Open Access Series on Imaging Studies (OASIS) [b]    375 (392)     41 (44)
Tennessee Twins Study (TTS)                          113 (118)     13 (13)
Multi-Modal MRI Reproducibility Resource (MMMRR) [d]                          21 (42)
Total:                                               3137 (3464)   347 (380)  21 (42)
a: https://www.nitrc.org/projects/fcon_1000
b: http://www.oasis-brains.org/
c: http://biomedic.doc.ic.ac.uk/brain-development/
d: https://www.nitrc.org/projects/multimodal
276. Building a Database: Offline Multi-Atlas Segmentation
Original Atlases
▪ 45 subjects, MPRAGE, OASIS (Marcus, et al. 2007)
▪ BrainCOLOR protocol (133 labels) (Klein, et al. 2010)
For each training image:
▪ Affinely registered to the MNI305 atlas (Collins, et al. 2004)
▪ Pairwise affine (Ourselin, et al. 2001) + non-rigid registration (Avants, et al. 2011)
▪ Fused using Hierarchical Non-Local Spatial STAPLE
▪ Classifier-based segmentation correction (Wang, et al. 2011)
Low-dimensional representation computed using Principal Component Analysis (PCA)
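Once every image in the database has a low-dimensional PCA representation, selecting training examples for a new target reduces to a neighbor search in that space. A minimal sketch (plain Euclidean distance in PCA coordinates is an illustrative stand-in for whatever geodesic criterion the method actually uses):

```python
import numpy as np

def nearest_training_examples(target_coords, train_coords, k):
    """Indices of the k training images closest to the target in PCA space.

    target_coords: (d,) low-dimensional representation of the target image.
    train_coords:  (n_train, d) representations of the training database.
    """
    d2 = np.sum((train_coords - target_coords) ** 2, axis=1)
    return np.argsort(d2)[:k]
```

The retrieved examples' pre-computed multi-atlas segmentations are then the training data for mapping the target's weak initial estimate to a full-quality estimate, without running any pairwise registrations at test time.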
280. Building a Database: Summary
Summary shown for all 3464 training images. Multi-atlas segmentation was performed on all 380 testing images and the 42 reproducibility images, but these were not included in the model.
306. Summary and Contributions: Geodesic Learner Fusion
▪ Dramatically lessens the computational burden of multi-atlas segmentation
▪ 36 hours -> 3-8 minutes
▪ Results in segmentations that are highly comparable to the reference multi-atlas estimate
▪ Very high intra-subject reproducibility
Publications
▪ Andrew J. Asman, Andrew J. Plassard, and Bennett A. Landman, "Geodesic Learner Fusion or: How We Learned to Stop Worrying and Love Big Data", Submitted to MICCAI, Boston, MA, September 2014.
307. Concluding Remarks
Theoretical Advancements
▪ Characterizing spatially-varying performance: Spatial STAPLE
▪ Accounting for imperfect correspondence: Non-Local STAPLE
▪ Estimating hierarchical performance models: Hierarchical STAPLE
▪ Bringing it all together: Hierarchical Non-Local Spatial STAPLE
Novel Applications
▪ Groupwise segmentation of the spinal cord's internal structure
▪ Reducing the computational burden through machine learning: Geodesic Learner Fusion