2. Registration
• Image registration
– Define geometric transformations T that map coordinates from one image onto another image such that some image quality criterion is maximized.
– Also referred to as image fusion, superimposition, matching, or merging
3. Registration algorithms
• Used to find the transformation
• Rigid & affine
– Landmark based
– Edge based
– Voxel intensity based
– Information theory based
• Non-rigid
– Registration using basis functions
– Registration using splines
– Physics based
• Elastic, fluid, optical flow, etc.
4. Rigid Body Registration of medical images
• Assumes the anatomical and pathological structures do not deform during image acquisition
• Tissue deformations are ignored and the images are registered using rigid body transforms
• Only rotations and translations
• 6 degrees of freedom: 3 translations and 3 rotations
• Key characteristic: all distances are preserved
5. 3D Rigid-body Transformations
• A 3D rigid body transform is defined by:
– 3 translations - in the X, Y & Z directions
– 3 rotations - about the X, Y & Z axes
• The order of the operations matters
• Translations; pitch (rotation about the x axis); roll (about the y axis); yaw (about the z axis) - see the sketch below
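A minimal sketch of how these 6 degrees of freedom define a single 4×4 homogeneous matrix (Python with NumPy is assumed, as is a rotate-then-translate, z·y·x composition convention; the slides do not fix a convention):

```python
import numpy as np

def rotation_x(theta):
    """Pitch: rotation about the x axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def rotation_y(theta):
    """Roll: rotation about the y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def rotation_z(theta):
    """Yaw: rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

def rigid_transform(tx, ty, tz, pitch, roll, yaw):
    """4x4 homogeneous rigid-body transform: rotate (x, then y, then z), then translate."""
    T = np.eye(4)
    T[:3, :3] = rotation_z(yaw) @ rotation_y(roll) @ rotation_x(pitch)
    T[:3, 3] = [tx, ty, tz]
    return T

# The order of operations matters: composing the same rotations in a
# different order gives a different transform.
A = rotation_x(0.3) @ rotation_y(0.5)
B = rotation_y(0.5) @ rotation_x(0.3)
print(np.allclose(A, B))  # False
```

Swapping the composition order yields a different matrix, which is the sense in which the order of the operations matters.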
6. Information theory based Rigid body Registration
• Image registration is posed as maximizing the amount of shared information in the two images
– equivalently, reducing the amount of information in the combined image
• Algorithms used
– Joint entropy
• Joint entropy measures the amount of information in the two images combined
– Mutual information
• A measure of how well one image explains the other; it is maximized at the optimal alignment
– Normalized Mutual Information
7. Measures of Information
• Hartley defined the first information measure:
– H = n log s
– n is the length of the message and s is the number of possible values for each symbol in the message
– Assumes all symbols are equally likely to occur
• Shannon proposed a variant (Shannon's entropy):
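In its standard form (presumably the expression the slide displays; the sum runs over all possible outcomes i, with probabilities p_i):

H = \sum_i p_i \log \frac{1}{p_i} = -\sum_i p_i \log p_i

The bullets below refer to the two factors of each summand: the probability weight p_i and the "second term" \log(1/p_i).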
• It weighs the information based on the probability that an outcome will occur
• The second term shows that the amount of information an event provides is inversely proportional to its probability of occurring
8. Three Interpretations of Entropy
• The amount of information an event provides
– An infrequently occurring event provides more information than a frequently occurring event
• The uncertainty in the outcome of an event
– Systems with one very common event have less entropy than systems with many equally probable events (see the sketch below)
• The dispersion in the probability distribution
– An image of a single amplitude has a less dispersed histogram than an image of many greyscales
• lower dispersion implies lower entropy
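A quick numeric illustration of the second interpretation (a Python sketch; the distributions are invented for the example):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

# One very common event -> low entropy; equally probable events -> maximal entropy.
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```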
9. Joint Entropy for Image Registration
• Define a joint probability distribution:
– Generate a 2-D histogram where each axis is the number of possible greyscale values in each image (see the sketch below)
– Each histogram cell is incremented each time a pair (I_1(x,y), I_2(x,y)) occurs in the pair of images
• If the images are perfectly aligned, the histogram is highly focused; as the images mis-align, the dispersion grows
• Recall that entropy is a measure of histogram dispersion
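One way the 2-D histogram might be built (a sketch assuming 8-bit greyscale images already resampled onto a common grid; joint_histogram and joint_probability are illustrative names, not from the slides):

```python
import numpy as np

def joint_histogram(img1, img2, bins=256):
    """Count co-occurring grey-value pairs (I_1(x,y), I_2(x,y)) at corresponding pixels."""
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist

def joint_probability(img1, img2, bins=256):
    """Normalize the joint histogram into a joint probability distribution p(a, b)."""
    hist = joint_histogram(img1, img2, bins)
    return hist / hist.sum()
```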
10. Entropy for Image Registration
• Using joint entropy for registration
– Define joint entropy to be:
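In its standard form (presumably the expression on the slide), with p(a, b) the joint probability distribution of the previous slide:

H(A,B) = -\sum_{a,b} p(a,b) \log p(a,b)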
– Images are registered when one is transformed relative to the other so as to minimize the joint entropy (computed in the sketch below)
– The dispersion in the joint histogram is thus minimized
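The formula maps directly onto the normalized joint histogram; continuing the earlier sketch (joint_probability is the hypothetical helper defined there):

```python
import numpy as np

def joint_entropy(img1, img2, bins=256):
    """H(A, B) in bits, computed from the normalized joint histogram."""
    p = joint_probability(img1, img2, bins)  # from the previous sketch
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```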
11. Joint entropy: overlap problem
[Figure: joint histograms for MR/MR, MR/CT and MR/PET pairs, shown aligned and at 2 mm and 5 mm of misregistration]
• Joint entropy is very sensitive to the mapping of position and intensity
• The histograms 'blur' with increasing misregistration
• May lead to an incorrect solution
Figure from Hill et al., "Voxel Similarity Measures for Automated Image Registration," Proc. SPIE 2359, 1994.
12. Solution: Mutual Information
A solution to the overlap problem from which joint entropy suffers is to consider the information contributed to the overlapping volume by each image being registered, together with the joint information. The information contributed by the individual images is simply the entropy of the portion of each image that overlaps with the other image volume.
13. Definitions of Mutual Information
• Three commonly used definitions:
– 1) MI(A,B) = H(B) - H(B|A) = H(A) - H(A|B)
• Mutual information is the amount by which the uncertainty in B (or A) is reduced when A (or B) is known.
– 2) MI(A,B) = H(A) + H(B) - H(A,B) (see the sketch after this list)
• Maximizing the mutual information is equivalent to minimizing the joint entropy (the last term)
• The advantage of mutual information over joint entropy is that it includes the entropies of the individual inputs
• It works better than joint entropy alone in regions of image background (low contrast): there the joint entropy is low, but this is offset by low individual entropies as well, so the overall mutual information will also be low
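Definition 2 translates directly into code; a sketch that reuses the hypothetical joint_probability helper from the earlier sketch:

```python
import numpy as np

def mutual_information(img1, img2, bins=256):
    """MI(A, B) = H(A) + H(B) - H(A, B), in bits, via the joint histogram."""
    p_ab = joint_probability(img1, img2, bins)  # from the earlier sketch
    p_a = p_ab.sum(axis=1)  # marginal of image 1 (sum over image 2's bins)
    p_b = p_ab.sum(axis=0)  # marginal of image 2

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return H(p_a) + H(p_b) - H(p_ab)
```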
14. Definitions of Mutual Information II
– 3) I(A,B) = \sum_{a,b} p(a,b) \log \frac{p(a,b)}{p(a)\, p(b)}
• This definition is related to the Kullback-Leibler distance between two distributions
• It measures the dependence of the two distributions
• In image registration, I(A,B) will be maximized when the images are aligned (checked in the sketch below)
• In feature selection, choose the features that minimize I(A,B) to ensure they are not related.
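A quick end-to-end check of the registration claim using the sketches above (synthetic random data stands in for real images):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512))
shifted = np.roll(img, 5, axis=0)        # simulate a 5-pixel misregistration

print(mutual_information(img, img))      # high (~8 bits): an image fully explains itself
print(mutual_information(img, shifted))  # much lower: little shared information once misaligned
```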