
Recent advances of AI for medical imaging : Engineering perspectives

Presentation material for the symposium on radiology in the era of artificial intelligence and big data
"Recent advances of AI for medical imaging : Engineering perspectives"



  1. 1. Deep learning and its application for radiologists Namkug Kim, PhD namkugkim@gmail.com Medical Imaging N Robotics Lab. http://mirl.ulsan.ac.kr Convergence Medicine/Radiology Biomedical Engineering Center Asan Medical Center/Univ. of Ulsan College of Medicine
  2. 2. Research collaborations with Hyundai Heavy Industries Co. Ltd., LG Electronics, Coreline Soft Inc., Osstem Implant, CGBio, VUNO, Kakaobrain. Conflicts of interest: stockholder of Coreline Soft, Inc. and AnyMedi; co-founder of Somansa Inc., Cybermed Inc., Clinical Imaging Solution, Inc., and AnyMedi, Inc. Selected grants as PI: National Research Foundation (NRF), South Korea: development of software for in-vivo flow quantification of cardio- and cerebrovascular disease using 4D MR velocity imaging at 7T, 2016; in-vivo flow study of cardiovascular disease using 4D flow MRI, 2015-7; development of integrated analysis software for MR spectroscopic imaging and MRI. KEIT (Ministry of Trade, Industry and Energy), South Korea: medical imaging AI project, 2016-20; 3D-printed patient-specific spinal implants, 2016-20; development of a 3D-printing-based fabrication and reconstruction system for restorative prostheses for edentulous and craniomaxillofacial-defect patients, 2015-9; development of a musculoskeletal reconstruction surgical robot, 2012-7; development of an image-guided interventional robot system, 2012-7. KHIDI (Ministry of Health and Welfare), South Korea: development of an imaging-based stroke prognosis prediction and treatment decision system, 2012-8; diagnosis and treatment of ischemic disease using an automated diagnosis program for coronary perfusion CT, 2013-6. Industry-academia collaboration: Hyundai Heavy Industry, Osstem Implant, S&G Biotech, Coreline Soft, Midas IT, AnyMedi, Hitachi Medical (Japan).
  3. 3. MBC Documentary Special "Future Human AI," 2016.12.05: image-guided interventional surgical robot (Hyundai Heavy Industries); lung image analysis software (Coreline Soft); 3D printing applications (Asan Medical Center). Movie clips.
  4. 4. Major Breakthroughs in Feedforward NN (timeline 1960-2012). Perceptron (1957), F. Rosenblatt: adjustable weights, but weights are not learned. XOR problem (1969), M. Minsky, S. Papert: the XOR problem is not linearly separable; Dark Age (AI winter). Neocognitron (1979), by Kunihiko Fukushima: first proposed CNN. Back propagation (1981), D. Rumelhart, G. Hinton, R. Williams: train multiple layers. Multi-layer Perceptron (1986): a solution to nonlinearly separable problems; big computation, local optima, and overfitting. Convolutional Neural Networks (1989), Yann LeCun et al.: back propagation for CNN; can theoretically learn any function. LeNet-5 (1998): convolutional networks improved by Yann LeCun et al.; classify handwritten digits (LeNet-5 architecture). Golden Age. CNN breakthrough (2012), by Alex Krizhevsky et al. (with G. Hinton): winner of ILSVRC 2012 by a large margin.
  5. 5. Why Deep Learning? (benchmark charts) Speech recognition (from the MS speech group); object (image) recognition on ImageNet: Google 2014 6.66%, Baidu 2015 5.98%, human level 5.10%, MS 2015 4.94%, Google 2015 4.80%; face verification accuracies: 97.35%, 97.45%, 97.53%, 98.52%, 99.15%, 99.47%, 99.63% (using DL); gene network structure inference.
  6. 6. Major Components in Deep Learning Breakthroughs: algorithms (unsupervised pre-training, supervised training for deeper models), parallel computing (NVIDIA CUDA cores: 5760, memory clock: 7.0 Gbps, standard memory configuration: 12288 MB), and big data.
  7. 7. Where do we use Deep Learning? Autonomous driving (pedestrian/traffic sign recognition) • Netflix movie recommendation • Language translation • Breast cancer detection • Skype translator • Gesture/pose detection* *Neverova, Natalia, et al. "Hand Pose Estimation through Weakly-Supervised Learning of a Rich Intermediate Representation." arXiv preprint arXiv:1511.06728 (2015)
  8. 8. Image recognition: object recognition, image tagging and retrieval, scene segmentation
  9. 9. Video recognition: video understanding (Google, 2014), scene parsing (NYU/Facebook, 2014), NVIDIA DRIVE PX (2015)
  10. 10. Speech and translation: speech recognition, machine translation, speech recognition + machine translation
  11. 11. Image interpretation / caption generation: image caption generation, video caption generation
  12. 12. Question answering: image question answering, speech recognition + image question answering, video question answering, text question answering
  13. 13. Generative models (art): artistic neural style
  14. 14. Generative models (scenes): Eyescream project at Facebook AI Research (using the Laplacian pyramid generative adversarial network, LAPGAN) http://soumith.ch/eyescream/
  15. 15. Generative models (faces, bedrooms): Deep Convolutional Generative Adversarial Networks (DCGAN); rotations are linear in latent space; bedroom generation; arithmetic on faces
  16. 16. Robotic grasping (Google)
  17. 17. Comparison between the Brain and NN
  18. 18. Comparison between the Brain and NN. Brain: 1) 10 billion neurons; 2) 60 trillion synapses; 3) distributed processing; 4) nonlinear processing; 5) parallel processing; 6) efficiency (20-25 W, about 20-25% of daily energy intake). Computer: 1) faster than a neuron (10^-9 s, cf. neuron: 10^-3 s); 3) central processing; 4) arithmetic operation (linearity); 5) relatively sequential processing; 6) efficiency (Titan X: 250 W). cf. 1 kcal = 1.16 Wh, 1 W = 1 J/s = 1 N·m/s, 1 cal = 4.2 J = 1.163 mWh.
  19. 19. Biologically Plausible Neural Network: mimics the human visual recognition system; each unit is connected to a small subset of other units; based on what it sees, it decides what it wants to say; units must learn to cooperate to accomplish the task
  20. 20. Hierarchical Representations: pixels, edges, object parts (combinations of edges), object models. H. Lee, R. Grosse, R. Ranganath, A. Y. Ng, "Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations," ICML 2009.
  21. 21. From Shallow to Deep Learning
  22. 22. From Shallow to Deep Learning. Shallow learning: SVM, linear and kernel regression, hidden Markov models (HMM), Gaussian mixture models (GMM), single-hidden-layer MLP / artificial neural net (ANN), ...; limited modeling capability of concepts; cannot make use of unlabeled data
  23. 23. Feature extraction
  24. 24. Neural Networks • Machine Learning • Knowledge from high dimensional data • Classification • Input: features of data • supervised vs unsupervised • labeled data • Neurons
  25. 25. Neural Networks (NN): iterative error correction. Forward propagation: sum the inputs, produce an activation, and feed it forward. Inputs x_1, ..., x_n with weights W1, W2 in the hidden layers; each neuron computes z = b + \sum_i x_i w_i and y = H(z), where H is the activation function and Y = f(X) is the network output. Information propagates forward, the output is compared with the target, and the error is back-propagated to update the weights. (A code sketch of this forward pass follows below.)
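A minimal NumPy sketch of the forward pass described on this slide, z = b + Σ_i x_i w_i and y = H(z), using a sigmoid as the activation H; the input, weight, and bias values are made up for illustration only.

```python
import numpy as np

def forward(x, w, b):
    """Single-neuron forward pass: z = b + sum_i(x_i * w_i), y = H(z)."""
    z = b + np.dot(x, w)          # weighted sum of the inputs plus bias
    y = 1.0 / (1.0 + np.exp(-z))  # H: sigmoid activation (one possible choice)
    return y

# Illustration-only values (not from the slides)
x = np.array([0.5, -1.2, 3.0])   # inputs x_1 .. x_n
w = np.array([0.4, 0.1, -0.6])   # weights w_1 .. w_n
b = 0.2                          # bias
print(forward(x, w, b))
```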
  26. 26. Simple Perceptron 45
  27. 27. Simple Perceptron 46
  28. 28. Simple Perceptron; Non-linear 47
  29. 29. Need for Multiple Units and Multiple Layers Multiple boundaries are needed (e.g. XOR problem) -> Multiple Units More complex regions are needed (e.g. Polygons) -> Multiple Layers 48
  30. 30. From Shallow to Deep Learning 49
  31. 31. Best practices: normalization (prevents very high weights and oscillation); validation set and early stopping (overfitting/generalization); mini-batch learning (update weights with multiple input vectors combined). (A minimal training sketch follows below.)
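A minimal tf.keras sketch of the practices listed above (input normalization, a validation split with early stopping, and mini-batch updates); the synthetic data and layer sizes are assumptions for illustration, not the presenter's setup.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data (illustration only)
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

norm = tf.keras.layers.Normalization()   # normalization keeps inputs in a well-behaved range
norm.adapt(x)

model = tf.keras.Sequential([
    norm,
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Validation split + early stopping guard against overfitting; batch_size gives mini-batch updates
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
model.fit(x, y, validation_split=0.2, batch_size=32, epochs=100,
          callbacks=[early_stop], verbose=0)
```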
  32. 32. Challenges in Training Feedforward NN: training multiple layers; vanishing gradient problem (gradients are diluted as layers go deep); only labeled data is used (most data is unlabeled); getting stuck in local minima; over-fitting; too many hyperparameters
  33. 33. Problems with Backpropagation. Limitations: gets stuck in local optima (weights start from random positions); error attenuation and long, fruitless training; slow convergence to the optimum (a large training set is needed); only labeled data is used (most data is unlabeled); backpropagation (BP) barely changes lower-layer parameters (vanishing gradient). Therefore, deep networks cannot be fully (effectively) trained with backpropagation.
  34. 34. Breakthroughs with Backpropagation: long, patient training with GPUs and special hardware; deep belief networks (unsupervised pre-training); convolutional neural networks (reducing redundant parameters); rectified linear units (constant gradient propagation)
  35. 35. Rectified Linear Units: more efficient gradient propagation (the derivative is 0 or constant, which just folds into the learning rate); more efficient computation (only comparison, addition, and multiplication). Leaky ReLU: f(x) = x if x > 0, else a·x, where 0 < a ≤ 1, so that the derivative is not 0 and some learning is still possible in that regime; many other variations exist. Sparse activation: for example, in a randomly initialized network only about 50% of hidden units are activated (have a non-zero output). (CS 678, Deep Learning.) A short NumPy sketch follows below.
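A short NumPy sketch of the two activations described above; the leak factor a = 0.01 is a common default, not a value from the slides.

```python
import numpy as np

def relu(x):
    """ReLU: 0 for negative inputs, identity otherwise (derivative is 0 or 1)."""
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    """Leaky ReLU: f(x) = x if x > 0 else a*x, so the gradient never vanishes entirely."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # [0.   0.    0. 0.5 2. ]
print(leaky_relu(x))  # [-0.02 -0.005 0. 0.5 2. ]
```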
  36. 36. Convolutional Neural Networks (CNN): a type of feed-forward neural network inspired by biological processes; weight sharing (convolution) + subsampling (pooling) reduces the number of parameters (reducing over-fitting) and gives translation invariance. Example pipeline [LeCun, 1998]: input 28 × 28 → convolution layer → feature maps 4@24 × 24 → max-pooling layer → 4@8 × 8 → convolution layer → 8@4 × 4 → max-pooling layer → 8@2 × 2 → reshape to 8·2·2 × 1 → linear layer → output 10 × 1. (A code sketch of this pipeline follows below.)
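A rough tf.keras sketch of the LeNet-style pipeline on this slide; the ReLU activations and 5×5 kernels are assumptions chosen so the feature-map sizes match the slide (24×24, 8×8, 4×4, 2×2), not the original LeNet settings.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(4, kernel_size=5, activation="relu"),   # 4 feature maps, 24x24
    tf.keras.layers.MaxPooling2D(pool_size=3),                     # downsample to 4 maps, 8x8
    tf.keras.layers.Conv2D(8, kernel_size=5, activation="relu"),   # 8 feature maps, 4x4
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # 8 feature maps, 2x2
    tf.keras.layers.Flatten(),                                     # reshape to an 8*2*2 vector
    tf.keras.layers.Dense(10, activation="softmax"),               # linear layer, 10-class output
])
model.summary()
```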
  37. 37. Convolution and pooling 57
  38. 38. Convolutional Neural Networks (CNN): higher-level features are built by repeating convolution and pooling (subsampling); convolution extracts specific features from local regions; pooling reduces dimensionality while yielding translation-invariant features
  39. 39. Convolutional Neural Networks (CNN) Neural network with sparse connections Learning algorithm: Backpropagation on convolution layers and fully-connected layers
  40. 40. Behavior of CNN 60
  41. 41. Visualization of the filter bank of the VGG16 architecture trained on ImageNet: most filters are identical up to rotation by some non-random factor (typically 90 degrees), i.e. approximately rotation-invariant; the rotation observation holds in block4_conv1; textures similar to those found in the objects appear in block5_conv2. (A filter-visualization sketch follows below.)
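A small sketch of how such filter-bank images can be produced with Keras' pretrained VGG16. Note this only displays the first convolution layer's kernels directly as RGB patches; the deeper filters mentioned on the slide (block4_conv1, block5_conv2) have many input channels and are usually visualized indirectly, e.g. by activation maximization.

```python
import matplotlib.pyplot as plt
from tensorflow.keras.applications import VGG16

# ImageNet-pretrained VGG16; pull the first conv layer's kernels, shape (3, 3, 3, 64)
model = VGG16(weights="imagenet", include_top=False)
filters, biases = model.get_layer("block1_conv1").get_weights()

# Normalize to [0, 1] for display and plot the first 16 filters as RGB patches
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(filters[:, :, :, i])
    ax.axis("off")
plt.show()
```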
  42. 42. Recurrent Neural Networks (RNNs)
  43. 43. Breakthroughs in CNN (ImageNet Large Scale Visual Recognition Challenge results, 2012-2015; error rates on the chart: 26.2%, 16.4%, 13.5%, 12.9%, 11.8%, 7.3%, 6.7%, 4.9%, 4.8%, 3.6%). SIFT + FVs (2012): 2nd place in 2012, SIFT + Fisher vectors, no CNNs. AlexNet (2012): 1st place in 2012, 5 conv layers + 3 fully connected layers, dropout & ReLU (rectified linear units), data augmentation (flip, random crop). (AlexNet architecture shown.)
  44. 44. Recent Breakthroughs in CNN (ILSVRC results chart as on the previous slide). SIFT + FVs (2012): 2nd place in 2012, no CNNs. AlexNet (2012): 1st place in 2012, dropout & ReLU. ZF Net (2013): 3rd place in 2013, by Matthew Zeiler & Rob Fergus, a variant of AlexNet. Clarifai (2013): 1st place in 2013, a deep learning startup founded by Matthew Zeiler, a variant of AlexNet. OverFeat (2013): 2nd place in 2013, by NYU, a variant of AlexNet. VGG networks (2014): 2nd place in 2014, by the Oxford computer vision group, 19 layers deep. GoogLeNet (2014): 1st place in 2014, 24 layers of convolution, a memory-efficient network. (GoogLeNet architecture shown.)
  45. 45. Recent Breakthroughs in CNN (ILSVRC results chart as on the previous slides), in addition to the entries above: batch normalization (2015, by Google), a simple but powerful normalization algorithm; parametric ReLU (2015); deep residual networks (2016), winner of ILSVRC 2015, by MSRA, more than 100 layers deep with skip connections (residual learning adds skip connections). (A residual-block sketch follows below.)
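A minimal tf.keras sketch of the two ideas named above, batch normalization and a residual (skip-connection) block; the filter count and input shape are illustrative assumptions, not the ResNet configuration used in the challenge.

```python
import tensorflow as tf

def residual_block(x, filters):
    """Identity residual block: two conv + batch-norm stages plus a skip connection (x + F(x))."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)   # batch normalization (2015)
    y = tf.keras.layers.Activation("relu")(y)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([shortcut, y])      # skip connection: residual learning
    return tf.keras.layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(56, 56, 64))       # illustrative feature-map shape
outputs = residual_block(inputs, 64)
tf.keras.Model(inputs, outputs).summary()
```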
  46. 46. Proposal pages 5-6
  47. 47. Proposal pages 1-6
  48. 48. (Video to be added) Proposal pages 61-62
  49. 49. Proposal pages 8, 93
  50. 50. Proposal pages 6, 101
  51. 51. Chest X-ray: data cleansing software with AI; gold standard (generate 1,000 images per class) with manual drawing software; target classes: nodule, interstitial opacity, consolidation, pleural effusion
  52. 52. Anonymization. Data collection: 1,821,455 normal and 287,626 abnormal studies; DICOM header meta-information; chest (AP, lateral); study-description filtering. Anonymizer (Anonymizer@MIRL, MIRL@AMC): 1) batch conversion; 2) anonymization of DICOM patient information (0008 tag; PatientID, sex, age, name, birth date, etc.) with research-ID generation; 3) efficient error handling. (A generic anonymization sketch follows below.)
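A generic pydicom sketch of the batch anonymization workflow described on this slide (identifier removal, research-ID generation, simple error handling). This is not the Anonymizer@MIRL tool itself; the folder names, salt, and the exact set of cleared tags are assumptions for illustration.

```python
import hashlib
from pathlib import Path
import pydicom

def anonymize(src: Path, dst: Path, salt: str = "project-salt") -> None:
    """Strip direct identifiers from one DICOM file and assign a research ID
    derived from a salted hash of the original PatientID (illustrative sketch)."""
    ds = pydicom.dcmread(str(src))
    research_id = hashlib.sha256((salt + str(ds.get("PatientID", ""))).encode()).hexdigest()[:12]
    for keyword in ("PatientName", "PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
        if keyword in ds:
            ds.data_element(keyword).value = ""   # clear direct identifiers
    ds.PatientID = research_id                    # research ID replaces the patient ID
    ds.remove_private_tags()                      # drop vendor-private tags
    ds.save_as(str(dst))

# Batch conversion over a study folder (paths are placeholders)
Path("anonymized").mkdir(exist_ok=True)
for path in Path("incoming_dicom").rglob("*.dcm"):
    try:
        anonymize(path, Path("anonymized") / path.name)
    except Exception as exc:                      # simple error handling: log and continue
        print(f"skipped {path}: {exc}")
```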
  53. 53. Cleansing. Data description: 9,589 images selected from 500,000; resized to 100 × 100; standardized. CNN training data (3,000/3,000), validation data (1,000/1,000), and test data (1,000/589) for the normal/abnormal classes. Results: CNN precision 99.3%, sensitivity 99.8%, specificity 98.4%; CNN+BN precision 100%, sensitivity 99.9%, specificity 100%. H. C. Shin, "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning," IEEE Transactions on Medical Imaging, 2016, pp. 1285-1298.
  54. 54. Chest X-ray CAM: weakly supervised learning + class activation map, with an ILSVRC pre-trained ResNet50 model. (A CAM sketch follows below.)
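A minimal tf.keras sketch of class activation mapping (Zhou et al., 2016) on an ImageNet-pretrained ResNet50, as named on the slide; it is not the authors' fine-tuned chest X-ray model, and the layer names ("conv5_block3_out", "predictions") are those of Keras' ResNet50 implementation.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")
last_conv = model.get_layer("conv5_block3_out")                    # final 7x7x2048 feature maps
class_weights = model.get_layer("predictions").get_weights()[0]    # dense weights, (2048, 1000)

# Model returning both the last conv features and the class predictions
cam_model = tf.keras.Model(model.input, [last_conv.output, model.output])

def class_activation_map(img_batch):
    """CAM = weighted sum of the last conv feature maps, using the dense-layer
    weights of the predicted class (valid because ResNet50 ends in GAP + dense)."""
    feats, preds = cam_model.predict(img_batch, verbose=0)
    cls = int(np.argmax(preds[0]))
    cam = np.tensordot(feats[0], class_weights[:, cls], axes=([2], [0]))  # (7, 7)
    cam = np.maximum(cam, 0)
    return cam / (cam.max() + 1e-8)

dummy = tf.keras.applications.resnet50.preprocess_input(
    np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32"))
print(class_activation_map(dummy).shape)   # (7, 7) coarse heatmap, upsampled onto the image
```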
  55. 55. Results
  56. 56. Interesting case: normal vs. cardiomegaly (ground truth vs. prediction); a surgical scar is present
  57. 57. Chest PA YOLO • Fine-tuned You Only Look Once (YOLO) model • The YOLO model has 26 layers, including 24 convolution layers and 2 fully connected layers • Only the final layer is trained • With only 6 classes of abnormal lesions, the last layer requires C = 6. (A generic final-layer fine-tuning sketch follows below.)
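A generic tf.keras sketch of the "train only the final layer" idea: freeze a pretrained backbone and fit a new head with C = 6 outputs. This is a classification-only stand-in under assumed settings (ResNet50 backbone, 224×224 inputs), not the 26-layer YOLO detector described on the slide.

```python
import tensorflow as tf

NUM_CLASSES = 6   # nodule, consolidation, interstitial opacity, cardiomegaly,
                  # pleural effusion, pneumothorax

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                          pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False                                        # freeze all pretrained layers

head = tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid")   # only this layer is trained
model = tf.keras.Sequential([backbone, head])
model.compile(optimizer="adam", loss="binary_crossentropy")       # multi-label setup
model.summary()
```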
  58. 58. Results: 2 × 2 confusion matrices (predicted condition vs. true condition, 1/0) for the six classes (1. nodule, 2. consolidation, 3. interstitial opacity, 4. cardiomegaly, 5. pleural effusion, 6. pneumothorax), in the order shown on the slide: [164, 56; 50, 941], [123, 41; 32, 982], [82, 14; 25, 1023], [414, 9; 80, 691], [267, 32; 43, 838], [55, 19; 22, 1050]. (A metric-computation sketch follows below.)
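A small helper showing how precision, sensitivity, and specificity follow from such a 2×2 matrix, assuming rows are the predicted condition and columns the true condition (that orientation is an assumption about the slide's tables).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Precision, sensitivity, and specificity from a 2x2 confusion matrix."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return precision, sensitivity, specificity

# First matrix above, read as rows = predicted (1/0), columns = true (1/0):
# TP = 164, FP = 56, FN = 50, TN = 941
print(diagnostic_metrics(164, 56, 50, 941))
# -> approximately (0.745, 0.766, 0.944)
```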
  59. 59. Examples • Nodule result ROI: 1) two nodules are detected in the chest X-ray image; 2) only a nodule is detected in the chest X-ray image • Consolidation result ROI: 1) consolidation and pleural effusion are detected simultaneously in the chest X-ray image; 2) only consolidation is detected in the chest X-ray image
  60. 60. Examples • Interstitial opacity and cardiomegaly result ROI: 1) two interstitial opacities are detected in the chest X-ray image; 2) cardiomegaly is detected in the chest X-ray image; 3) cardiomegaly and two pleural effusions are detected in the chest X-ray image
  61. 61. Examples • Pleural effusion result ROI: 1) cardiomegaly and pleural effusion are detected in the chest X-ray image • Pneumothorax result ROI: 1) a pneumothorax is detected in the chest X-ray image; 2) pneumothorax and pleural effusion are detected simultaneously in the chest X-ray image
  62. 62. AI in Radiology CAD for COPD Compare various classifiers for OLD classification Lee Y., et al, CMPB, 2009. 93(2): p. 206-15. Adding shape features for accuracy enhancement of emphysema quantification., J Digit Imaging, 22:136-148, 2009, SPIE 2007 Honorable Mention Poster Award / JDI Best paper 2009 Compare texture-based quantification vs density-based quantification for emphysema quantification, Investigative Radiology 43:395-402 , 2008 CAD for DILD Study feasibility for DILD, Korean J Radiol 10:455-463, 2009 Context sensitive SVM for whole lung quantification, IFMIA 2011, IWPFI 2011, JDI 2011 CrossVendor study for ILD (GE vs Siemens), Kim N , et al, RSNA 2010, Med Phys 2013 Texture based segmentation for iodine quantification in DECT of ILD, RSNA 2013, ER 2015 3D extension of texture analysis in DILD, RSNA 2014 CADD for DILD with regional parenchyma classification, WIP in Med Phys Deep learning for DILD, WIP in JDI DILD lung segmentation using deep learning, WIP in Med Phys Noise reduction of HRCT using auto-encoder, WIP in Med Phys Ensemble method for ILD classifier, JDI accepted CAD for AVUS (Ultrasounds), Thyroid Nodule (Med Phys), Lymph node meta (Acta Radiol), Pulmonary Embolism (Eur Radiol), …
  63. 63. Semantic Segmentation on DILD HRCT 92
  64. 64. Image Pattern Classification on DILD HRCT: (a) consolidation, (b) emphysema, (c) normal, (d) ground-glass opacity, (e) honeycombing, (f) reticular opacity. Deep learning ensemble classifiers, accuracy (%): trained and tested on GE: 96.06 ± 1.61; Siemens/Siemens: 96.11 ± 1.19; GE+Siemens/GE+Siemens: 85.12 ± 1.91. A 5-10% improvement (a breakthrough compared to the previous 5 years). Scatter plot: O = GE, X = Siemens; SVM vs. CNN.
  65. 65. 3D DILD, 2 classes (normal shown): raw data, ROI extraction; 3D, 3-channel 2D, and 2D inputs
  66. 66. Results: 2D (82.7%) • 3 × 2D (81.7%) • 3D (84.7%) • No augmentation was applied, to keep the comparison fair; the 3D model therefore shows larger variation (many parameters).
  67. 67. CNN Airway Segmentation: 80 COPD patients' inspiration CT; 69 CT volumes included in training, 11 CT volumes not included in training; gold standard: manual segmentation. Input patches of 3 axial, 3 sagittal, and 3 coronal slices (32 × 32 × 3 × 3) with shared weights; 2-class classification. For each voxel inside the lungs, the CNN produces a probability volume; the segmented airway is obtained by hard thresholding (0.51) and selecting the connected component. (A post-processing sketch follows below.)
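A sketch of the post-processing step named on the slide: hard-threshold the CNN probability volume at 0.51 and keep a single connected component. Keeping the largest component is an assumption here; the slide's method may instead keep the component connected to a seed such as the trachea.

```python
import numpy as np
from scipy import ndimage

def extract_airway(prob_volume: np.ndarray, threshold: float = 0.51) -> np.ndarray:
    """Threshold a probability volume and keep the largest 3D connected component."""
    binary = prob_volume > threshold
    labels, n = ndimage.label(binary)              # 3D connected-component labelling
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1            # label of the biggest component
    return labels == largest

# Toy example with a random probability volume (illustration only)
prob = np.random.rand(32, 64, 64)
airway = extract_airway(prob)
print(airway.sum(), "voxels kept")
```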
  68. 68. CNN Airway Segmentation Initial Manual
  69. 69. Endoscopic image classification (colon): NICE I; true class, predicted class, per-class probabilities; patch-level results; whole-image results
  70. 70. Result review: (-), (+), (0) (*the position of 0 may vary); ground truth NICE III; CAMs for NICE I, NICE II, and NICE III; prediction NICE III; abnormal-region marking
  71. 71. 112 CBIR for Medical Images
  72. 72. Previous research on quantification: definition of similar lung images; extraction of distribution features. * https://en.wikipedia.org/wiki/Large_margin_nearest_neighbor * Y. J. Chang, et al., "A support vector machine classifier reduces interscanner variation in the HRCT classification of regional disease pattern in diffuse lung disease: Comparison to a Bayesian classifier," Medical Physics 40 (5), 051912 (2013)
  73. 73. 114 Evaluation: Statistics ( * p-value < 0.01, ** p-value < 0.001 ) Evaluation: Recall Accuracy Fitting Test Data Acknowledgement This work was supported by the Industrial Strategic technology development program (10072064, Development of Novel Artificial Intelligence Technologies To Assist Imaging Diagnosis of Pulmonary, Hepatic, and Cardiac Diseases and Their Integration into Commercial Clinical PACS Platforms) funded by the Ministry of Trade Industry and Energy (MI, Korea)
  74. 74. CHALLENGES & DISCUSSIONS 115
  75. 75. AP (Associated Press): artificial intelligence (robots) now writes articles and the technology has been disclosed; capable of producing 2,000 articles per second; coverage expanded from earnings reports of 300 companies to 3,000 companies
  76. 76. 117 Right now, about 80% of Americans who need a lawyer can't afford one "With ROSS, lawyers can scale their abilities and start to service this very large untapped market of Americans in need,"
  77. 77. GOLDMAN SACHS 118
  78. 78. The Luddite movement (1811-7)
  79. 79. Precision Medicine Initiative: "And that's why we're here today. Because something called precision medicine … gives us one of the greatest opportunities for new medical breakthroughs that we have ever seen." President Barack Obama, January 30, 2015. The core of precision-medicine devices: big data + artificial intelligence; big data ▶ AI ▶ patient-tailored diagnosis/treatment/public health
  80. 80. Medical big data + AI: realizing precision medicine with medical big data; Internet of Things, genetic testing, medical imaging, patient monitoring; 370,000 medical records per day (about 1 TB per year); about 2 million imaging studies per year (30 TB); DNA information with 3.8 × 10^9 base pairs; a large general hospital
  81. 81. The need to reduce healthcare costs; healthcare efficiency; domestic status. Efficiency = efficacy / cost: maximize efficacy through precision treatment, and reduce cost by building an efficient healthcare system. With an aging society, the ratio of healthcare spending to GDP is rising rapidly and the rate of increase keeps growing; considering national income and consumption levels, healthcare costs need to be brought down against this trend.
  82. 82. Domestic and global AI market status: AI is emerging as the core of markets across society and industry, expected to grow at a 53.65% compound annual growth rate through 2020 [1]; projected domestic AI market size: KRW 2.2 trillion in 2020, KRW 11 trillion in 2025, KRW 27.5 trillion in 2030 [2]. 1) MarketsandMarkets, 2016; 2) KT Economics and Management Research Institute
  83. 83. Global Trends Drive Momentum in the Health Care Industry. Data explosion: 150+ exabytes of healthcare data today[1]; over 230K active clinical trials[2]; 80% of healthcare data comes from unstructured sources[3]. Dynamic delivery environment: 50% expected alternative payments from the Centers for Medicare and Medicaid by 2018[4]; 75%+ of patients expected to use digital health services in the future[5]; 90K expected shortage of physicians by 2020[6]. Value vs. volume: 4.7 trillion estimated global economic impact of chronic disease by 2030[7]; 3 trillion estimated US healthcare spending[8]; hundreds of decisions made each day by a person living with Type 1 diabetes[9]. Efficient and effective R&D: 1 in 10 clinical trials in cancer shut down due to lack of participation[10]; 2.6B average cost to develop a new pharma drug[11]; <10% of drugs currently in development make it to market[12]. Sources: 1: NCBI, big data analytics in healthcare: promise and potential; 2: ClinicalTrials.gov; 3: NIH; 4: CMS; 5: McKinsey, Healthcare's Digital Future, July 2014; 6: AAMC report, The Complexities, from 2014 to 2025; 7: WEF, global economic burden of non-communicable diseases; 8: Health Affairs, team analysis; 9: OpenAPS.org; 10: Bio clinical development success rates; Health Economics volume 47, May 2016; life expectancy data, WHO, 2012; 2015 Global Life Sciences Outlook: adapting in an era of transformation, Deloitte DTTL, 2014; Informa plc MarketLine, extracted Oct 2014
  84. 84. Opportunity (*IBM Watson): 8 trillion industry size; 2 trillion of waste in the industry; better experience (imaging: unnecessary tests); lower cost (oncology: variability of care); better outcomes (life sciences: failed clinical trials; government: fraud, waste and abuse; value-based care: cost of chronic disease); 360 billion total IT and healthcare market opportunity
  85. 85. International trends: AI in healthcare (overseas). The global market is projected at KRW 5.05 trillion in 2020 with a 53.65% growth rate over the next five years, making this a new growth business area [1]. Technologies for diagnosis and prescription using medical-document and imaging big data are being developed; broad technology roadmaps span healthcare to the precision-medicine industry; deep-learning-based analysis systems for lung cancer diagnosis. 1) MarketsandMarkets, Feb. 2016
  86. 86. 133
  87. 87. AI in healthcare: since 2011, overseas AI-based healthcare startups have attracted about USD 870 million (KRW 1 trillion) in investment
  88. 88. AI in healthcare (domestic). Domestic trends: strong AI capabilities of IT companies such as Samsung and LG; world-class clinical medicine and clinical-trial expertise; AI-based medical startups such as VUNO Korea and Lunit showing early results, expanding the domestic medical-AI industry base. Examples: a system that identifies diffuse lung disease from CT and assists specialists' judgments on disease reading, progression, and treatment; a pediatric bone-age assessment support system; deep-learning detection of tuberculosis and breast cancer from chest radiographs and other imaging data; combining AI with systems biology to improve the existing drug-development process.
  89. 89. AI application areas in medicine. AI capabilities: visual intelligence, language intelligence, decision intelligence, automatic classification, summarization/creation, spatial intelligence. Medical AI applications: clinical-trial case selection, drug-development processes, clinical decision support, assistant services, speech-recognition medical records, data-driven precision medicine, genome analysis, prediction of drug combinations and complications, diagnostic-test recommendation, reading assistance, normal/abnormal triage, similar-case retrieval, report generation, pathology reading assistance, logistics/operating-room/ward management, robotic surgery.
  90. 90. AI application areas in medicine (clinically oriented classification). Normal/abnormal triage and initial diagnosis: early differential diagnosis without detailed expert reading, using big data and next-generation AI techniques. Similar-case retrieval: search and visualize similar cases from the many cases in a database to support diagnosis. Preliminary report generation: fuse AI-based medical-image analysis with natural language processing to automatically generate reports at a level that can assist the radiologist's reading. Pathology reading assistance: apply AI to pathology-image big data for diagnosis, pathogenesis analysis, and prognosis prediction. Logistics, operating-room, and ward management: efficient AI-based operation of logistics, operating rooms, and wards within a single hospital and across hospital clusters. Robotic surgery: use AI to plan robot-assisted surgery, predict risk, and minimize the invaded region. Clinical decision support, data-driven precision medicine, reading assistance. New drugs: apply AI to the drug-development process to find drug combinations and repurposing that treat disease more effectively, and to optimize drug candidates and clinical-trial cohorts. Clinical-trial case selection: use AI-based search to find suitable diseases and patients, shortening trial preparation and improving objectivity. Assistant services: fuse IoT, speech recognition, and AI for efficient scheduling, diagnosis and care processes, work-information updates, and personalized curation. Speech-recognition medical records: automate recording of diagnoses and readings, with speech recognition and document generation able to recognize and structure medical terminology. Diagnostic-test recommendation: use AI and big data for precise diagnosis and treatment, recommending additional diagnostic tests that raise accuracy and lower risk. Genomics: for personalized medicine, analyze and model associations across genomic, multimodal imaging, and clinico-pathological big data for prognosis prediction, diagnosis, and treatment. Drug-combination and complication prediction: when treating and prescribing, flag case-based risks of interactions or complications to support the physician's final decision.
  91. 91. Challenges and countermeasures for applying AI to medicine. Challenges: labeling data for AI training is hard because the clinical environment is poorly standardized; data are not openly available due to privacy protection; barriers in medical-device approval, new-health-technology assessment, and reimbursement; a disconnect between AI experts and medical experts. Countermeasures: research on standardizing and linking medical big data; development of open datasets for AI and technology promotion through grand challenges that use them; comprehensive policy-support measures from the early stage of research planning; building an ecosystem that spans AI education, R&D, and industrialization.
  92. 92. Strategy: proof of principle. Google retinopathy study (JAMA 2016): 130k images, 54 ophthalmologists, a reference standard, grading of severity, image quality, left/right eye, and field of view. Ophthalmology, pathology, radiology, dermatology. Cloud/Vision/Speech APIs; AI that makes AI; Amazon, MS. Naver: J project, Clova, five-senses AI; Line chat bot since 2014.
  93. 93. Technical issues in medical imaging. Data collection: more (clean) data, accurate annotation, legal issues. Model selection: deeper networks, off-the-shelf models. Result interpretation: neural-network visualization, human-friendly interpretation.
  94. 94. Machine Operable, Human Readable Visual attention Category – feature mapping Sparsity and diversity 150
  95. 95. Machine Operable, Human Readable: evidence hotspots for lesion visualization. "SpineNet: Automatically Pinpointing Classification Evidence in Spinal MRIs"
  96. 96. Machine Operable, Human Readable: visualization of salient regions in a bone X-ray. (A gradient-saliency sketch follows below.)
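A generic gradient-saliency sketch of the kind of salient-region visualization described here, using an ImageNet-pretrained ResNet50 as a stand-in; it is not the bone X-ray method shown on the slide.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")

def saliency_map(img_batch):
    """Simple gradient saliency: |d(score)/d(pixel)| for the predicted class,
    highlighting the input regions the network relies on."""
    x = tf.convert_to_tensor(img_batch)
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        top = tf.reduce_max(preds, axis=-1)        # score of the predicted class
    grads = tape.gradient(top, x)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()   # per-pixel saliency

dummy = tf.keras.applications.resnet50.preprocess_input(
    np.random.uniform(0, 255, (1, 224, 224, 3)).astype("float32"))
print(saliency_map(dummy).shape)   # (224, 224)
```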
  97. 97. MD-friendly Interpretation Breast cancer risk prediction through BI-RADS categorization of mammography Analysis and visualization for breast density prediction Mapping and visualization of patient by predicted breast cancer risk score 153
  98. 98. MD-friendly Interpretation: learning to read chest X-rays; automated annotation with attention and visual-sentence mapping
  99. 99. MD-friendly Interpretation 155 Contents-based case retrieval Suggest similar cases with the clinically matching context
  100. 100. Better to type it once than to hear about it a hundred times (百聞而不如一打): try it hands-on at http://playground.tensorflow.org/ (a minimal XOR example follows below)
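In the hands-on spirit of this slide, a minimal tf.keras sketch of the XOR problem from the earlier slides: a single perceptron cannot separate XOR, but one small hidden layer usually learns it. The layer sizes, learning rate, and epoch count are illustrative choices, not settings from the talk.

```python
import numpy as np
import tensorflow as tf

# XOR truth table: not linearly separable
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="tanh", input_shape=(2,)),  # hidden layer adds nonlinearity
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)
print(model.predict(x, verbose=0).round().ravel())   # typically [0. 1. 1. 0.]
```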
  101. 101. 167
  102. 102. Collaborators • MIRL • Clinical Collaborators@Asan Medical Center, SNU BH – Radiology : Chest, Cardiac, Abdomen, Neuro/Brain – Neurology :Dongwha Kang, Chongsik Lee, Jaehong Lee, Sangbeom Jun, Misun Kwon, Beomjun Kim – Cardiology ; Jaekwan Song, Jongmin Song, Younghak Kim – Internal Medicine : Jeongsik Byeon – Pathology : Hyunhee Go – Surgery : Bumsuk Go, JongHun Jeong, Songchuk Kim • Academy • KAIST EE, CS, Math • Related Companies • LG, VUNO, Coreline Soft, MIDAS IT, KakaoBrain
