3. Radiology historically has been a leader in digital transformation in healthcare. The introduction of digital imaging systems, picture archiving and communication systems (PACS), and teleradiology transformed radiology services over the past 30 years.
Radiology is again at a crossroads for the next generation of transformation, possibly evolving into a one-stop integrated diagnostic service.
4. Since the 1970s, radiology has adopted many new digital imaging modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Computed Radiography (CR), Single Photon Emission Computed Tomography (SPECT), Digital Ultrasound, Digital Mammography, and many others.
These digital images were initially printed on films for interpretation, sharing, and archiving.
As digital technologies for data capture, data storage, image display, and transmission improved, radiology operations began to convert to a filmless digital environment in the late 1990s.
5. During the mid-’80s, the radiology community began to explore computer aided diagnosis (CAD) as a possible aid to radiologists.
Since the mid-2010s, there has been overwhelming interest in machine learning techniques in almost all fields involving data classification or analysis.
11. Imaging technologies today:
• Multi-slice (volumetric) and multi-energy CT
• Multi-parametric and multi-frame (dynamic) MRI
• Multi-dimensional (3D + time) US
• Multi-planar interventional imaging
• Multi-modal (hybrid) PET/CT and PET/MRI
13. In 1936, Alan Turing described a hypothetical computing device, now known as the Turing machine.
15. Turing test: a computer passes the test if a human
interrogator, after posing a number of written questions,
cannot tell whether the written responses come from a
person or a computer
Smith test: data is provided to a computer to analyse in any
way it wants; the computer then reports the statistical
relationships it thinks may be useful for making predictions.
The computer passes the Smith test if a human panel
concurs that the relationships selected by the computer
make sense
25. The term “artificial intelligence” was first used in 1956.
The term “machine learning” was coined by Arthur Samuel in 1959 to define a field of AI in which computers learn automatically from accumulated data.
Artificial Intelligence (AI) represents the capacity of machines to mimic the cognitive functions of humans (in this context, learning and problem solving).
27. 1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
2. “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
3. “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
29. First attempts focused on the so-called expert systems, with rule-based reasoning, like the “if-else” statements in programming.
An illustrative example from 1984 is the case of the US Campbell Soup Company: Aldo Camino, an expert with 46 years of experience, knew everything about the complex 22 m high sterilizers, which heated 68,000 cans of soup to 120 °C. If something went wrong, a lot of soup was lost. Aldo knew everything: “if this valve ticks, and the temperature there is too low, that valve must be opened further,” and so on. He flew from factory to factory, but was about to retire, so it was decided to record his full knowledge in a large set of AI rules.
Later this form of AI got stuck; it turned out to be impossible to keep discovering more rules and adding them to the system. These relatively simple rule-based systems ultimately disappointed, and by the end of the 1990s the field was virtually given up.
32. Types of artificial intelligence
Artificial intelligence is classified into two main
categories: AI that’s based on functionality and AI
that’s based on capabilities.
Based on Functionality
• Reactive Machine – This AI has no memory
power and does not have the ability to learn from
past actions. IBM’s Deep Blue is in this category.
33. • Limited Memory – With the addition of memory, this AI uses past information to make better decisions. Common applications like GPS location apps fall into this category.
• Theory of Mind – This AI is still being developed, with the goal of its having a very deep understanding of human minds.
• Self-Aware AI – This AI, which could understand and evoke human emotions as well as have its own, is still only hypothetical.
34. Based on Capabilities
• Artificial Narrow Intelligence (ANI) – A system that
performs narrowly defined programmed tasks. This
AI has a combination of reactive and limited memory.
Most of today’s AI applications are in this category.
• Artificial General Intelligence (AGI) – This AI is
capable of training, learning, understanding, and
performing like a human.
• Artificial Super Intelligence (ASI) – This AI performs
tasks better than humans due to its superior data
processing, memory, and decision-making abilities.
No real-world examples exist today.
36. Machine Learning
A computer “learns” when its software is able to successfully
predict and react to unfolding scenarios based on previous
outcomes. Machine learning refers to the process by which
computers develop pattern recognition, or the ability to
continuously learn from and make predictions based on data,
and can make adjustments without being specifically
programmed to do so.
A form of artificial intelligence, machine learning effectively
automates the process of analytical model-building and allows
machines to adapt to new scenarios independently.
37. Machine learning (ML) is a subfield of AI that allows the machine to learn from data without being explicitly programmed.
38. The four steps for building a machine learning model
are:
1. Select and prepare a training data set necessary to solve the problem. This data can be labeled or unlabeled.
2. Choose an algorithm to run on the training data.
• If the data is labeled, the algorithm could be
regression, decision trees, or instance-based.
• If the data is unlabeled, the algorithm could be a
clustering algorithm, an association algorithm, or a
neural network.
3. Train the algorithm to create the model.
4. Use and improve the model.
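The four steps above can be sketched with a toy example. The sketch below uses a nearest-centroid classifier, one of the simplest instance-based methods for labeled data; the 2-D points and class names are invented illustrations, not real data.

```python
# Step 1: select and prepare a labeled training set (illustrative points).
training_data = [
    ((1.0, 1.2), "benign"), ((0.8, 1.0), "benign"),
    ((3.0, 3.1), "malignant"), ((3.2, 2.9), "malignant"),
]

# Step 2: choose an algorithm -- here, nearest centroid.
def train(samples):
    """Step 3: 'train' by computing the centroid of each class."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(model, point):
    """Step 4: use the model -- assign the class of the nearest centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

model = train(training_data)
print(predict(model, (0.9, 1.1)))   # prints "benign"
print(predict(model, (3.1, 3.0)))   # prints "malignant"
```

Step 4 in practice also includes evaluating the model on new data and retraining as more data accumulates.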
39. There are three methods of machine learning: “Supervised”
learning works with labeled data and requires less training.
“Unsupervised” learning is used to classify unlabeled data
by identifying patterns and relationships. “Semi-supervised”
learning uses a small labeled data set to guide classification
of a larger unlabeled data set.
40. Deep Learning
Deep learning is a subset of machine learning that has
demonstrated significantly superior performance to some
traditional machine learning approaches. Deep learning
utilizes a combination of multi-layer artificial neural networks
and data- and compute-intensive training, inspired by our
latest understanding of human brain behavior. This approach
has become so effective it’s even begun to surpass human
abilities in many areas, such as image and speech
recognition and natural language processing.
Deep learning models process large amounts of data; their training can be supervised, semi-supervised, or unsupervised.
42. The concept of neural networks emerged from the biologic neuron system.
An artificial neural network (ANN) is composed of interconnected artificial neurons. Each artificial neuron implements a simple classifier model, outputting a decision signal based on a weighted sum of evidence passed through an activation function; the network integrates the signals from many such neurons. An ANN system can be built with thousands of these basic computing units.
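A single artificial neuron as described above can be sketched in a few lines: a weighted sum of inputs plus a bias, passed through an activation function (here the logistic sigmoid). The weights and inputs below are arbitrary illustrations.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of evidence followed by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # decision signal in (0, 1)

# Strong positive evidence drives the output toward 1,
# strong negative evidence drives it toward 0.
print(neuron([1.0, 2.0], [0.8, 0.5], bias=-0.3))   # ~0.82
print(neuron([1.0, 2.0], [-0.8, -0.5], bias=0.3))  # ~0.18
```

An ANN wires thousands of these units together, feeding each neuron's output into the weighted sums of the next layer.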
43. The convolutional neural network (CNN) consists of a series of convolution layers; a stack of small-kernel convolution layers is compositionally equivalent to a layer with a larger effective kernel.
In effect, a CNN acts as a feature learner based on spatial features with multiple channels.
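The core operation of a convolution layer can be sketched directly: a small kernel slides over the image and each output pixel is a weighted sum of the pixels under it. The image and kernel values are illustrative; as in most CNN frameworks, the kernel is applied without flipping (i.e., cross-correlation).

```python
def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds where pixel values change left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

In a trained CNN the kernel weights are not hand-chosen like this edge detector; they are learned from data, which is what makes the network a feature learner.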
52. The OsiriX project started in November 2003 when Antoine Rosset MD, a radiologist from Geneva,
Switzerland, received a grant from the Swiss National Fund to explore and learn about medical
digital imaging. At first, the goal of the OsiriX project was to simply write a small software program to
convert medical imaging DICOM files to a QuickTime movie file, in order to help a radiologist friend
to create a teaching files database. But soon, Antoine Rosset realized it could do much more.
In June 2004 the first version of OsiriX was released on Antoine Rosset’s personal homepage. At
that stage, it only offered a basic database and a simple medical images viewer, without post-
processing functions or measurement tools. But that was enough to get noticed: an article about the
OsiriX project was published in June 2004 in the Journal of Digital Imaging and became a reference.
Antoine Rosset returned to the Geneva University Hospital in Switzerland in October 2004 to continue his career as a radiologist.
That’s when Joris Heuberger, a Math and Computer Sciences major from Geneva, joined the story in March 2005. In June 2005, during Apple’s Worldwide Developer Conference (WWDC) in San Francisco, the OsiriX team received two prestigious Apple Design Awards: Best Use of Open Source and Best Mac OS X Scientific Computing Solution. OsiriX was already becoming the reference medical image viewer that would inspire many others, although none ever equaled the original.
In 2009, OsiriX became the official DICOM viewer for the Radiology Department of the Geneva University Hospital, thanks to the support of Professor Ratib, who had returned to Geneva a few years earlier as Chairman of the Nuclear Medicine service.
64. Lung Nodule Analysis
Detect And Monitor Nodules As Small As 3mm, Ensuring Early Detection Of Lung Cancer
1. Detects All Nodules From 3-30mm
2. Auto-Estimates Measurements And Features
3. Estimates Calcification % Of The Nodule
4. Visual Representation Of Nodule Distribution Within Lungs
5. Fleischner Guidelines For Follow-Up And Management
65. Pulmonary Fibrosis Analysis
Identify fibrosis patterns with quantification for accurate diagnosis and monitoring
1. Recognizes Ground-Glass Opacities, Consolidation, Honeycombing And Vessel Volumes
2. Lobe-Wise Quantification Of Patterns
66. Pulmonary Fibrosis Distribution
Identify fibrosis patterns with quantification for accurate diagnosis and monitoring
1. Distribution Maps For Upper Vs Lower Regions For Whole, Right And Left Lungs
2. Distribution Maps For Central Vs Peripheral Regions
67. COVID-19 Severity Analysis
Estimate COVID-19 Severity Score With Lobe-Wise And Pattern-Wise Break-Up
1. Estimates Involvement Of Ground-Glass Opacities And Consolidation In Lungs
2. Quantification Of Lobe-Wise Involvement Of Disease
3. Auto-Calculation Of COVID Severity Score According To The 25-Point Scoring System
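The 25-point scoring mentioned above can be sketched as follows: each of the five lung lobes is scored 0-5 by its percentage of involvement, and the lobe scores are summed (maximum 5 × 5 = 25). The band thresholds below follow a commonly published chest CT severity scheme; the exact cut-offs used by any particular product are an assumption here.

```python
def lobe_score(percent_involved):
    """Map a lobe's % involvement to a 0-5 score (assumed bands)."""
    if percent_involved == 0:
        return 0
    if percent_involved < 5:
        return 1
    if percent_involved <= 25:
        return 2
    if percent_involved <= 49:
        return 3
    if percent_involved <= 75:
        return 4
    return 5

def severity_score(lobe_percentages):
    """Sum the per-lobe scores for the five lobes (0-25 total)."""
    return sum(lobe_score(p) for p in lobe_percentages)

# Illustrative involvement percentages for RUL, RML, RLL, LUL, LLL:
print(severity_score([0, 3, 30, 60, 80]))  # 0 + 1 + 3 + 4 + 5 = 13
```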
68. Emphysema Analysis
Quantify Emphysema Involvement In The Lungs, Helping Monitor Disease Severity
1. Lobe-Wise Emphysema Involvement
2. Emphysema Recognition Using -950HU As Threshold
3. Lung And Lobe Volumetry
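The -950 HU threshold above is the basis of a simple quantification: the emphysema index is the fraction of lung voxels whose attenuation falls below -950 HU. A minimal sketch, with synthetic voxel values standing in for a segmented lung region:

```python
def emphysema_index(lung_voxels_hu, threshold=-950):
    """Percentage of lung voxels below the HU threshold."""
    low = sum(1 for v in lung_voxels_hu if v < threshold)
    return 100.0 * low / len(lung_voxels_hu)

# Synthetic lung region: mostly normal lung (around -850 HU),
# with some emphysematous voxels below -950 HU.
voxels = [-860, -840, -870, -955, -970, -830, -960, -845, -850, -990]
print(emphysema_index(voxels))  # 40.0 (4 of 10 voxels below -950 HU)
```

Real tools apply this per lobe after lung and lobe segmentation, which is where the lobe-wise involvement and volumetry items come from.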
85. IBM Watson Health is one of the pioneers in healthcare applications powered by Artificial Intelligence, and IBM has significant capability to deliver successful solutions in a number of AI use cases. IBM aims to process medical images quickly and to interpret the data efficiently using information from various databases.
86. Butterfly Network: Butterfly aims to bring a different
perspective on medical imaging with both hardware and
software solutions. Butterfly IQ is a portable mobile device
that uses ultrasound-on-chip technology, making it the world’s first handheld whole-body ultrasound system. The device also has the capability of detecting diseases in real time while scanning.
87. Arterys built the first product to visualize and quantify blood flow in the body using any MRI, and received the first FDA approval for clinical cloud-based deep learning in healthcare.
Arterys is a pioneer in four-dimensional (4D) cloud-based imaging. Arterys’ Lung AI platform helps to reduce missed detections by 42 to 70%.
88. Gauss Surgical Inc. received CE (Conformité Européenne)
Mark for its Triton System for iPad, the world’s first and only
mobile platform for real-time monitoring of surgical blood
loss.
89. Zebra Medical Vision was one of Fortune’s “50 Companies Leading the AI Revolution” in 2015, and was selected as one of “The Most Innovative Companies of 2017” in the AI/Machine Learning sector.
In 2019, it received FDA approval for the world’s first AI chest X-ray triage product.
90. Sigtuple‘s innovative solutions aim to solve the problems
caused by the chronic shortage of trained medical
practitioners in India.
91. Freenome raised $70.6M within only two years of its launch. Freenome detects cancer by imaging blood cells. The company had raised $237.6M by July 2019.
92. Enlitic uses deep learning techniques to analyze the data
extracted from radiology images. A study suggests that
radiologists can read cases 21% faster with the help of
Enlitic.
93. Caption Health provides guidance to healthcare professionals and inexperienced people to perform ultrasound examinations accurately and quickly. It also facilitates the work of healthcare professionals by providing automatic quality assessment and smart interpretation.
94. Behold.ai uses artificial intelligence technologies to help radiologists diagnose radiology scans in a variety of cases. Behold.ai reduces the workload of medical professionals by speeding up the diagnostic process.
95. Viz.ai released a product in late 2019 for detecting early signs of stroke. In February 2020, Viz.ai released a new-generation synchronized care platform for patients in the post-acute care period. The platform sends a notification to healthcare professionals when there is a sign of a serious situation.
96. DiA Imaging has an AI-powered ultrasound image analysis solution. DiA’s software uses machine learning algorithms that automatically detect image borders and identify motion in different frames of ultrasound images.
97. RetinAi’s “Discovery Platform” helps to collect, organize, and analyze health data from the eye in order to detect age-related macular degeneration (AMD), diabetic retinopathy (DR), glaucoma, and other conditions.
98. Subtle Medical: Subtle Medical’s software improves the quality of noisy medical images and provides better interpretation. It is especially helpful for patients who have difficulty holding still for long periods of time.
99. BrainMiner is a UK-based company whose software, DIADEM, provides an automated system for analyzing MR brain scans, helping clinicians with an easily interpreted report.
100. Lunit has developed AI solutions for precision diagnostics
and therapeutics. The company aims to optimize diagnosis
and treatment matches by searching for the right diagnosis
at the right cost, and the right treatment for the right patients.
Lunit and GE Healthcare launched an AI-powered chest X-
ray analysis package designed to detect and highlight eight
common conditions, such as tuberculosis and pneumonia,
including those linked to COVID-19, using their algorithms
101. Examples of approved AI algorithms in radiology include:
• Quantifying a patient’s coronary artery calcification
• An AI tool to help radiologists spot pneumothorax
• An AI triage tool for intracranial hemorrhage
• An algorithm that helps radiologists spot pleural effusion from chest X-ray images
• An AI mammogram tool
Currently, 46 AI algorithms have approvals from the Food and Drug Administration (FDA) and/or Conformité Européenne (CE) marking.
103. Radiomics is the high-throughput extraction of quantitative
imaging features from a radiographic image
Radiomics refers to a set of techniques for extracting a
large number of quantitative features from medical images
and subsequently mining these features to retrieve
clinically useful diagnostic and prognostic information.
104. Morphologic Features
Morphologic features describe the size, volume, and shape of the volume of interest (VOI), usually a tumor. Unlike a visual assessment of tumor morphology by radiologists, morphologic features are expressed as statistical values in radiomics.
105. Histogram Features
A histogram is a plot displaying the pixel frequency in accordance with pixel values. Multiple features can be calculated from a histogram, which describe the magnitude (mean), dispersion (standard deviation), asymmetry (skewness), peakedness or flatness (kurtosis), randomness (entropy), uniformity (energy and uniformity), and dispersion relative to the magnitude (coefficient of variation) of gray-level pixel values. These histogram features describe the distribution pattern of gray-level pixel values within a VOI as a whole, but cannot address the spatial relationship among pixels or the textural pattern.
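The first-order histogram features just listed can be computed directly from the gray-level pixel values of a VOI. A minimal sketch with illustrative pixel values (real radiomics packages bin the intensities first; that step is omitted here for brevity):

```python
import math

def histogram_features(pixels):
    """Mean, dispersion, skewness, kurtosis, entropy, and energy."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * std ** 4)
    # Entropy and energy come from the normalized histogram.
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    probs = [c / n for c in counts.values()]
    entropy = -sum(q * math.log2(q) for q in probs)
    energy = sum(q * q for q in probs)
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy, "energy": energy}

print(histogram_features([10, 10, 20, 20, 20, 30, 40, 40]))
```

Note that all of these values are computed over the VOI as a whole, which is exactly why they cannot capture spatial relationships between pixels.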
106. Textural Features
Textural features are a key component of radiomics features and describe the spatial relationship between each individual pixel and its neighboring pixels. Two commonly used matrices for textural analysis are the gray-level co-occurrence matrix (GLCM) and the gray-level run-length matrix (GLRLM). The GLCM is a matrix describing the frequency of two neighboring pixels with certain gray-level pixel values, while the GLRLM describes the length of a continuous run of pixels with a certain gray-level pixel value.
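The GLCM described above can be built in a few lines for one direction. In this sketch, entry (a, b) counts how often a pixel with gray level a has a right-hand neighbor with gray level b; the tiny 3-level image is illustrative (real analyses aggregate several directions and symmetrize the matrix).

```python
def glcm_horizontal(image, levels):
    """GLCM for offset (0, 1): each pixel and its right-hand neighbor."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

image = [
    [0, 0, 1],
    [1, 2, 2],
    [0, 1, 2],
]
for row in glcm_horizontal(image, levels=3):
    print(row)
# [1, 2, 0]
# [0, 0, 2]
# [0, 0, 1]
```

Textural features such as contrast, correlation, and homogeneity are then computed as statistics over this matrix.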
107. Higher-Order Features
Higher-order features refer to textural features extracted from filtered images. Various filters have been used to emphasize the characteristics of images. A Gaussian filter is a smoothing filter that reduces the sensitivity to image noise. A Laplacian filter is an edge-enhancing filter. Since the Laplacian filter enhances any rapid intensity changes on an image, it may amplify image noise as well as edges.
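The edge-enhancing behavior of the Laplacian filter can be sketched with the standard 4-neighbor discrete Laplacian: each output pixel is 4 × center minus its four neighbors, so flat regions give 0 and rapid intensity changes give large responses. Image values are illustrative; borders are skipped for simplicity.

```python
def laplacian(image):
    """4-neighbor discrete Laplacian over the interior of a 2-D image."""
    h, w = len(image), len(image[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i - 1][j - 1] = (4 * image[i][j]
                                 - image[i - 1][j] - image[i + 1][j]
                                 - image[i][j - 1] - image[i][j + 1])
    return out

# Flat background with one bright pixel: only the intensity change responds.
image = [
    [5, 5, 5, 5],
    [5, 9, 5, 5],
    [5, 5, 5, 5],
    [5, 5, 5, 5],
]
print(laplacian(image))  # [[16, -4], [-4, 0]]
```

Higher-order radiomics features are then the same histogram and textural statistics, computed on this filtered image instead of the original.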
108. Process of Radiomics Analysis
The radiomics analysis of medical images involves multiple
processes, including image preprocessing, segmentation,
feature extraction, feature selection, and classification.
Image preprocessing is an important step for achieving valid and reproducible radiomics features.
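The stages listed above can be sketched as a chain of toy functions. Every operation and threshold here is an illustrative placeholder standing in for the real step (normalization, VOI segmentation, feature extraction, selection, classification), not the method of any actual radiomics package.

```python
def preprocess(image):
    """Toy preprocessing: min-max normalize pixel values to [0, 1]."""
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo) for p in image]

def segment(image):
    """Toy segmentation: keep pixels above half intensity as the VOI."""
    return [p for p in image if p > 0.5]

def extract_features(voi):
    """Toy extraction: mean and range of the VOI's pixel values."""
    return {"mean": sum(voi) / len(voi), "range": max(voi) - min(voi)}

def select_features(features, keep):
    """Toy selection: keep only the named features."""
    return {k: features[k] for k in keep}

def classify(features, threshold=0.7):
    """Toy classifier: a single threshold on the mean intensity."""
    return "suspicious" if features["mean"] > threshold else "benign"

image = [10, 20, 200, 220, 240, 30, 15, 250]
features = select_features(extract_features(segment(preprocess(image))),
                           ["mean"])
print(classify(features))  # prints "suspicious"
```

The value of writing the pipeline this way is that each stage can be swapped independently, which is also why preprocessing choices propagate into every downstream feature and must be kept reproducible.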
109. Artificial intelligence is capable of revolutionizing the healthcare industry through the expedited development of personalized and automated diagnostics, new data-based diagnostic methods, imaging-guided robot-assisted surgery, tele-monitoring of chronic conditions, support for correct medical decisions, and systematic monitoring of potential diagnostic errors.
Expertise, wisdom, human attitude, care, empathy, mutual understanding, and support lie at the very base of the medical profession and cannot be automated.