Master's students use concepts from my (Jeff Funk) forthcoming book (Technology Change and the Rise of New Industries) to analyze the technical and economic feasibility of new human-computer interfaces (e.g., touch, gestures, voice, neural interfaces). See my other slides for details on concepts, methodology, and other new industries.
Human-Computer Interfaces: When will new ones become technically and economically feasible?
1. The Future of Human-Computer Input Interfaces. Stephanie Budiman, Johnny Cham, Karthik Nandakumar, Osbert Poniman, Mauhay Mary Esther Samson
2. Computing Anytime, Anywhere! Motivation: in today's world, easy access to information and computing is required anytime and anywhere.
4. Robust HCIs are needed to enable ubiquitous computing. We focus only on input interfaces in this presentation.
5. Technology Paradigms for Input Interfaces: batch interfaces, command-line interfaces, graphical user interfaces, natural user interfaces, and neural interfaces.
10. Microphone: Methods of Improvement. A microphone array can mitigate background noise and interference [1]; the resulting improvement in SNR brings a corresponding decrease in word error rate. Noise-suppression algorithms can increase SNR by 18 dB with just 4 microphones in an array [2]. [1] LOUD project, MIT Computer Science and Artificial Intelligence Laboratory, 2005. [2] "Microphone Array Project in MSR: Approach and Results", Microsoft Research, June 2004.
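The 18 dB figure above combines array processing with noise-suppression algorithms; the averaging part alone is easy to check. Below is a minimal sketch, not from the cited projects, that simulates time-aligned microphones picking up the same signal plus independent noise and measures the SNR gain of averaging (ideally about 10*log10(N) dB, i.e. roughly 6 dB for 4 microphones):

```python
import math
import random

def array_snr_gain_db(n_mics: int, n_samples: int = 20000, seed: int = 0) -> float:
    """Estimate the SNR gain (dB) from averaging n_mics aligned microphone
    channels, each carrying the same signal plus independent Gaussian noise."""
    rng = random.Random(seed)
    # 440 Hz tone at a 16 kHz sampling rate (illustrative signal)
    signal = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(n_samples)]
    mics = [[s + rng.gauss(0.0, 1.0) for s in signal] for _ in range(n_mics)]
    avg = [sum(col) / n_mics for col in zip(*mics)]

    def power(xs):
        return sum(x * x for x in xs) / len(xs)

    sig_p = power(signal)
    noise_single = power([m - s for m, s in zip(mics[0], signal)])
    noise_avg = power([a - s for a, s in zip(avg, signal)])
    snr_single = 10 * math.log10(sig_p / noise_single)
    snr_avg = 10 * math.log10(sig_p / noise_avg)
    return snr_avg - snr_single

# Plain averaging of 4 microphones gives ~6 dB; reaching the 18 dB quoted
# above therefore requires noise-suppression algorithms on top of averaging.
```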
11. Automated Speech Recognition (ASR). "Increase in vocabulary sizes needs exponential increase in computing power due to potential combinatorial explosions." L. Rabiner, "Challenges in Speech Recognition", NSF Symp. on Next Gen. ASR, 2003.
12. ASR Accuracy Improvements. [Chart: NIST benchmark test history, May 2009; word error rate (%) by year.] ASR accuracy is acceptable only in some niche applications.
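Word error rate, the metric plotted in the NIST history, is the word-level edit distance between the recognizer output and a reference transcript, normalized by the reference length. A minimal sketch (the function name is mine, not from the benchmark tooling):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

One inserted word against a 4-word reference, for example, gives a WER of 0.25.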
14. Speech as an important modality in multimodal user interfaces (e.g., Microsoft Kinect) may be the future.* * http://blogs.msdn.com/b/sprague/archive/2004/10/22/246506.aspx
17. Improving Throughput of Touch UI. Interaction based on large multi-touch screens can increase the throughput of touch user interfaces. B. Buxton, "Multi-Touch Systems", Microsoft Research, 2007.
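Throughput of pointing interfaces is conventionally quantified with Fitts' law, which the slide does not spell out; the sketch below uses the Shannon formulation to illustrate why larger direct-touch targets at shorter movement distances raise the achievable bits per second:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Fitts' law, Shannon formulation: ID = log2(D/W + 1), in bits.
    distance = movement distance to the target, width = target width."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits/second for a single pointing task."""
    return index_of_difficulty(distance, width) / movement_time_s

# Illustrative numbers: a 2 cm target 14 cm away reached in 0.8 s
# has ID = log2(8) = 3 bits and throughput 3.75 bits/s.
```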
19. Comparison of Touch Technologies. Source: Frost & Sullivan, Advances in Haptics & Touch Technology, 2010.
20. Future of Touch: Tangible Bits. Manipulating digital objects through physical objects.
21. Gesture Interfaces. Key components: a 3D camera (image sensor) and tracking, recognition, and gesture-understanding software. Key values that need improvement: accuracy, throughput, and affordability.
25. Camera Technology Improvements. Reducing pixel size (green square) and improving sensitivity (yellow circle) have miniaturized cameras without reducing quality. T. Suzuki, "Challenges of Image-Sensor Development", ISSCC, 2010.
26. Camera Price Improvements. [Chart: price per pixel for a 12 MP camera, by year.] The number of pixels (resolution) has increased, while the price per pixel has decreased. K. Wiley, "Digital Photography", www.keithwiley.com.
28. CMOS technology now has the required temporal resolution to enable production of affordable time-of-flight (ToF) 3D cameras.
29. 3D cameras will improve the accuracy of gesture UIs. [Chart: depth error/distance vs. distance to the imaged object.*] Cost-effective 3D image sensors are now becoming available (e.g., Microsoft Kinect, ~150 USD). * R. Lange, "3D Time-of-flight distance measurement with custom solid-state image sensors in CMOS/CCD-technology", PhD thesis, 2000.
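The temporal-resolution requirement becomes concrete when you work out the depth equation for a ToF camera. A minimal sketch (the numbers are illustrative, not taken from Lange's thesis):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth from a direct time-of-flight measurement: d = c * t / 2."""
    return C * round_trip_s / 2

def phase_tof_depth_m(phase_rad: float, mod_freq_hz: float) -> float:
    """Continuous-wave ToF: depth from the phase shift of a modulated
    signal, d = c * phi / (4 * pi * f); unambiguous range is c / (2 f)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# A round trip of ~6.67 ns corresponds to only ~1 m of depth; resolving
# centimeters therefore demands sub-nanosecond effective timing, which is
# why fast, cheap CMOS circuits were the enabler for affordable ToF cameras.
```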
30. Neural Interfaces. Key component: a brain-scanning device. Key values that need improvement: accuracy, throughput, and affordability. Required improvements in brain-scanning technology: accuracy (higher spatial resolution), throughput (higher temporal resolution), and affordability (smaller size and better materials).
31. Key Brain Scanning Technologies. [Chart: modalities (SPECT, EEG, CT, MEG, fMRI, NIRS, MRI, PET) plotted by year of introduction (1936, 1950, 1968, 1972, 1973, 1975, 1983, 1991) and cost; price points shown range from >US$30K and US$180K-250K through US$250K, US$0.5M-3M, US$1M-1.5M, US$2.4M, and US$2.9M up to US$5M-7M.] Baranga, A. B.-A. (2010), "Brain's Magnetic Field: a Narrow Window to Brain's Activity", Electromagnetic Field and the Human Body workshop, pp. 3-4.
32. Comparison of Technologies. A neuron fires at a scale of ~0.1 mm (spatial) and ~10 ms (temporal). [Chart: non-invasive vs. invasive modalities.] Ideally, a non-invasive technology with both high spatial resolution and high temporal resolution is required; additionally, the technology must be affordable and portable to be useful in HCI applications. Gerven, M. v., et al., "The Brain-Computer Interface Cycle", J. Neural Eng., 2009.
33. Spatial Resolution Improvement. While spatial resolution is important for accuracy, high temporal resolution is critical for user interfaces. R. Kurzweil, "The Singularity is Near", 2005.
34. ElectroEncephaloGraphy (EEG). A non-invasive interface that uses scalp electrodes to pick up brain signals. Key limitation: poor spatial resolution. Increasing the number of EEG electrodes may provide a limited improvement in spatial resolution and a higher SNR. J. Malmivuo, "Comparison of the Properties of EEG and MEG", Intl. J. of Bioelectromagnetism, 6(1), 2004.
35. MagnetoEncephaloGraphy (MEG). [Chart: brain magnetic-field strengths, in fT.] Baranga, A. B.-A. (2010), "Brain's Magnetic Field: a Narrow Window to Brain's Activity", Electromagnetic Field and the Human Body workshop, p. 12.
36. MEG: Improvements in Millimeter-Scale Atomic Magnetometers. [Chart: sensitivity of devices A (2004), B (2007), C (2007), and D (2009); target: ~100 fT at <100 Hz.] J. Kitching, et al., "Uncooled, Millimeter-Scale Atomic Magnetometers", IEEE Sensors 2009 Conference, pp. 1844-1846.
37. Further Scope for Improvement. Hybrid technologies: enable levels of diagnostic accuracy that individual modalities cannot offer, e.g., MEG + EEG + MRI [1] or PET/CT and PET/MRI [2]. Better shielding for noise reduction: flux-entrapment shields for low-frequency noise; lossy magnetic shields based on induced eddy currents for high-frequency noise. Reduced costs: open-source projects such as OpenEEG. [1] Baranga, A. B.-A. (2010), "Brain's Magnetic Field: a Narrow Window to Brain's Activity", Electromagnetic Field and the Human Body workshop, p. 15. [2] Stommen, J. (2011, Mar 25), "Superior capabilities boost hybrid imaging, says report", Medical Device Daily.
Touch screens, in particular, are becoming increasingly popular because of their ease and versatility of operation as well as their declining price. Touch screens can include a touch panel, which can be a clear panel with a touch-sensitive surface. The touch panel can be positioned in front of or integral with a display screen so that the touch-sensitive surface covers the viewable area of the display screen. Touch screens can allow a user to make selections and move a cursor by simply touching the display screen via a finger or stylus. In general, the touch screen can recognize the touch and the position of the touch on the display screen, and the computing system can interpret the touch and thereafter perform an action based on the touch event.

Touch screens typically include a touch panel, a controller, and a software driver. The touch panel is characteristically an optically clear panel with a touch-sensitive surface that is positioned in front of a display screen so that the touch-sensitive surface is coextensive with a specified portion of the display screen's viewable area (most often, the entire display area). The touch panel registers touch events and sends signals indicative of these events to the controller. The controller processes these signals and sends the resulting data to the software driver. The software driver, in turn, translates the resulting data into events recognizable by the electronic system (e.g., finger movements and selections).

Single Touch. Single touch occurs when a finger or stylus creates a touch event on the surface of a touch sensor, or within a touch field, such that it is detected by the touch controller and the application can determine the X,Y coordinates of the touch event.
These technologies have been integrated into millions of devices and typically do not have the ability to detect or resolve more than a single touch point at a time as part of their standard configuration.

Single Touch with Pen Input. Single touch with pen input functionality can range from a simple, inactive pointer or stylus to complex, active tethered pens. Inactive pens enable the same input characteristics as a finger, but with greater pointing accuracy, while sophisticated, active pens can provide more control and more uses for the touch system, with drawing, palm-rejection, and mouse-event capabilities.

Single Touch with Gesture. Enhancements to firmware, software, and hardware have increased the touch functionality of many single-touch technologies. Some touch technologies can use advanced processing capabilities to "detect" or recognize that a second touch event is occurring, which is called a "gesture event." Since single-touch systems cannot resolve the exact location of the second touch event, they rely on algorithms to interpret or anticipate the intended gesture input. Common industry terms for this functionality are two-finger gestures, dual touch, dual control, and gesture touch.

Two Touch. Two touch refers to a touch system that can detect and resolve two discrete, simultaneous touch events. The best demonstration of two-touch capability is drawing two parallel lines on the screen at the same time. Two-touch systems can also support gesturing.

Multi-touch. Multi-touch refers to a touch system's ability to simultaneously detect and resolve a minimum of three touch points. All three or more touches are detected and fully resolved, resulting in a dramatically improved touch experience. Many expect multi-touch to become a widely used interface, mainly because of the speed, efficiency, and intuitiveness of the technology.
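Given the taxonomy above, the practical difference between gesture-only and fully resolved systems is that resolved coordinates make gestures trivial to compute. A sketch of per-frame classification and pinch detection, assuming the controller reports an (x, y) pair per contact (function names are mine):

```python
import math

def classify_touch(points: list) -> str:
    """Label a frame of fully resolved touch points per the taxonomy above:
    single touch, two touch, or multi-touch (3+ points)."""
    n = len(points)
    if n <= 1:
        return "single"
    return "two-touch" if n == 2 else "multi-touch"

def pinch_scale(prev: list, curr: list) -> float:
    """Zoom factor implied by two resolved touches in consecutive frames:
    > 1 means the fingers moved apart (zoom in), < 1 means a pinch."""
    def dist(pts):
        (x1, y1), (x2, y2) = pts
        return math.hypot(x2 - x1, y2 - y1)
    return dist(curr) / dist(prev)
```

A gesture-only single-touch system must approximate this with heuristics, since it never sees the second point's true coordinates.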
Touch Technology refers to technology that can detect and process touch signals.

How It Works. A (resistive) touch screen basically consists of two layers. When the user presses at a point, the two layers come into contact and a signal is created.

Different Touch Technologies.

Capacitive: Operation relies on the capacitance of the human body. When a person touches the screen, a small current flows to the point of touch, causing a voltage drop that is sensed at the four corners. Capacitive touch screens use the body of the user as a ground for a small electric current promulgated over the screen.

Resistive: When a person presses on the top sheet, it is deformed and its conductive side comes into contact with the conductive side of the glass, effectively closing a circuit. The voltage at the point of contact is read from a wire connected to the top sheet. The term "resistive" refers to the way the system registers the user's touch: a resistive touch screen responds to the pressure of the touch.

Infrared: Infrared scanning systems register a touch when a field of infrared beams is interrupted.

Surface Acoustic Wave: A surface-acoustic-wave touch screen identifies a touch by the reduction of the acoustic signal at the point of contact on the screen.
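For the resistive case described above, position read-out reduces to a voltage-divider ratio per axis: the controller drives one layer, samples the other, and scales the reading to screen coordinates. A minimal sketch assuming a 12-bit ADC (the resolution and scaling are illustrative, not tied to any specific controller):

```python
ADC_MAX = 4095  # full scale of an assumed 12-bit converter

def resistive_touch_position(adc_x: int, adc_y: int,
                             width_px: int, height_px: int) -> tuple:
    """Map raw voltage-divider readings from a 4-wire resistive panel to
    screen coordinates: position is proportional to V_read / V_drive."""
    x = round(adc_x / ADC_MAX * (width_px - 1))
    y = round(adc_y / ADC_MAX * (height_px - 1))
    return x, y
```

Real controllers add debouncing, median filtering, and a touch-down pressure check before reporting a point, but the coordinate math is this ratio.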
Comparing the technologies for multi-touch use: Resistive offers the flexibility of activation by multiple touch devices and an attractive price, but does not offer high clarity; overall, it is an excellent offering for retailers seeking to implement touch while minimizing cost. Capacitive offers good image clarity and excellent durability at a moderate price, but is limited to activation by contact with a human finger; in traditional retail environments this combination of benefits has been widely accepted. Surface acoustic wave accepts input from several devices and offers excellent clarity and durability, but is also more sensitive to the contaminants prevalent in a retail environment. Infrared allows input from multiple devices and requires minimal calibration, but dirt and debris may cause it to register unintentional touches, and it is significantly slower to react to a touch than the other technologies.
As scanning techniques that do not require getting inside the skull achieve finer and finer resolutions, we will be increasingly able to understand the details of brain processes and how things happen at various structural levels.
Limitation: noise. Lab-environment noise: elevators, lamps, general equipment, magnetometer movement, geomagnetic fluctuations, etc. Human-body noise: eye movements and blinks, cardiac activity, electric currents in muscles. In addition, source localization in the brain may have multiple solutions (the inverse problem).
SERF magnetometers are in the sensitivity range required for biomagnetic measurements. Because MEG signals are very weak, highly sensitive magnetic sensors and substantial shielding are needed. We have reached the sensitivity target, but further reduction is still needed.