Wearables are embracing AI, transforming the way we live, act, learn and behave as social human beings. However, turning this pop-science rhetoric into reality is nothing short of an enormous, multifaceted challenge. In this talk, I will explore the system and algorithmic challenges in modelling behaviour in this augmented human era. In particular, I will discuss how an "Earable" can be used as a multi-sensory computational platform to learn and infer human behaviour and to design ultra-personal connected services.
Computational Behaviour Modelling for the Internet of Things, by Fahim Kawsar
This document discusses computational behavior modeling for the Internet of Things. It begins by posing questions about the future of connected devices and apps. The author then discusses using sensors, learning, and actions to create useful IoT devices like a smart thermostat. However, many current IoT devices are dismissed as solving no real problems. The document outlines using wireless networks and wearables to understand human context and behavior through large datasets of user activity traces and applying machine learning. Challenges are discussed around privacy and participation guarantees. Applications include people analytics, productivity management, and space management to quantify enterprises. The future may include contextual automation through multi-modal sensing and deep learning on devices, networks, and infrastructure to better understand environments and activities.
IoT 3.0: Connected Living in an Everything-Digital World, by Fahim Kawsar
We are observing a monumental effort from industry and academia to make everything connected. Naturally, to understand the needs of these connected things, we need a better understanding of humans and where, when, and how they interact. Then we can create digital services and capabilities that fundamentally change the way we experience our lives. Right now, IoT is all about connectivity and scale. The next generation of IoT will be about learning and contextual automation. Designing intention- and behavior-aware services will be the principal source of differentiation and competitive advantage for industry players.
To this end, in this talk I explore how wearable devices and Wi-Fi networks can be used as a sensing platform to understand you and the world around you, and to design future consumer-facing connected services.
This document discusses quantifying workplace experiences using sensors and data analytics. It describes how tracking face-to-face interactions, moods, and other metrics can provide insights about employee productivity, collaboration, and happiness. Sensors can measure factors like noise levels, light, and air quality in offices. Analyzing interaction patterns can predict team performance and discover emerging leaders. Quantified data from badges and mobile phones can help optimize workspace design and resource allocation while giving employees self-tracking tools and recommendations to improve collaboration. The document outlines several prototype applications and shares findings from pilot projects at Bell Labs locations.
UbiComp 2013 Talk on Device Dynamics at Home, by Fahim Kawsar
1. The paper discusses a study of internet usage patterns in 86 Belgian homes to understand how device usage has evolved over time based on location and activity.
2. The study found that computing has spread beyond traditional locations like desks and is now common in new areas like kitchens and bathrooms. Usage varies by location, time of day, and activity.
3. The findings provide insights into how user preferences and context drive device selection for different activities and locations in the home. This informs the design of future home computing applications.
Network Intelligence Driven Human Behavior Modeling, by Fahim Kawsar
This document discusses using network intelligence and behavior modelling to enable a more sustainable connected world. It describes how analyzing large datasets of population trends, Internet of Things deployments, mobile network usage, and transit card usage can provide insights into human behavior and activity patterns. These insights can then be used to optimize resource usage, predict needs, and inform the development of more sustainable IoT and smart city solutions. Case studies describe analyzing mobile web data from Seoul to identify user activity diversity and periodicity, in-home internet traces from Belgium to segment households by behavior, and transit card data from London to map travel flows and identify functional areas of the city. The goal is to leverage these network sensing and modeling techniques to design technologies and systems that are more sustainable.
By 2050 the world’s urban population is expected to grow by 72%. This steep growth creates an unprecedented need to understand cities so that we can plan for the future societal, economic and environmental well-being of their citizens. The increasing deployments of Internet of Things (IoT) technologies and the rise of so-called “Sensored Cities” are opening up new opportunities for forming a profound understanding of the relationship between citizens and their cities. Dr. Fahim Kawsar, the Director of Internet of Things research at Bell Labs, argues that while silicon sensing helps us know our cities quantitatively, it is also important to understand the city qualitatively by capturing subjective metrics such as citizen well-being, perception of safety, trust, cleanliness, friendliness, happiness, etc. In his talk he will explain how Bell Labs’ research is addressing this aspect of the urban landscape.
An exploration of how openly shared, passively collected, and persistent personal data is providing individuals with scaffolding and feedback for complex tasks. The resulting technosocial impact is a shifting of the locus of control from centralized management and production of resources, decentralizing towards individuals.
Taking Qualitative Research to the Cloud - Ericsson Consumerlab, by Merlien Institute
Presented by Jasmeet Sethi, Regional Head of ConsumerLab, Ericsson
at Qualitative360 Asia 2013
19-21 November 2013, Singapore
This event is proudly organised by Merlien Institute
Check out our upcoming events by visiting http://qual360.com/
Simon Nash, an engagement and experience expert, introduces what we mean by "digital psychology" and how Reading Room are incorporating it into our core consultancy offering.
Mind the Gap - A Pecha Kucha Presentation by Pravin Shekar, NFN Labs
The document discusses using different survey methods like SMS, IVRS, application-based, and automated voice response for conducting surveys across generations in India. It notes that SMS surveys allow for anytime, anywhere surveys with quick turnaround but have limitations like size constraints. Application-based surveys address some SMS limitations but have software and hardware compatibility issues. Automated voice response surveys are effective for screeners, short surveys, and open-ended questions, keeping respondents engaged. The document advocates using a combination of both online and offline methods like government kiosks, WAP, and mobile clinics to conduct surveys in developing nations.
IRJET - Gesture Recognition using Sixth Sense Technology, by IRJET Journal
This document discusses gesture recognition using Sixth Sense technology. Sixth Sense is a wearable device that uses a camera to recognize hand gestures and a projector to display digital information in the real world. It bridges the gap between the physical and digital worlds by allowing users to access online information through natural hand gestures. The device consists of a camera, projector, mirror, and is connected wirelessly to a smartphone. The camera captures gestures and sends the data to the phone for processing. The processed data is then projected using a mini projector onto a mirror and surface in the real world, augmenting digital information. The technology aims to free digital information from screens and integrate it into the physical world through gesture-based interactions.
Shaspa Intelligent Shared Spaces And Sustainable Development, by David Wortley
1) Personalized and persistent relationships with technology are becoming embedded in our lives across all generations. Physical spaces are important for social and economic networks but lack intelligence for personalized experiences.
2) Personalized, persistent experiences will be increasingly demanded from physical spaces. SHASPA uses technologies like sensors, AI and data visualization to connect people to physical environments and support sustainable development.
3) SHASPA integrates emerging technologies into construction to reconnect people to the physical environment and support the environment and economy through innovative projects.
This document discusses contextual intelligence and the challenges of building digital assistants. It summarizes:
1) Contextual intelligence aims to understand relationships and utility to achieve goals with available resources, like practical "street smarts". Current digital assistants still face problems with privacy, missing information, latencies, accuracy, and providing truly actionable information.
2) Technologies alone cannot solve these problems - it requires understanding user concerns, capabilities for human conversational repair, giving users control of their data models, adjusting expectations based on application needs, and discerning what information a user already knows or what will help achieve their goals.
3) The grand challenge is for HCI research methods to create the knowledge needed to emulate human contextual intelligence.
This document summarizes a study on evaluating visualization ranges over time on mobile phones through crowdsourced tasks. The study tested two layouts (linear and radial), three data granularities (week, month, year), and different tasks. Results showed that the linear layout and finer granularities like week led to faster completion times with fewer errors. However, performance also depended on the data type and task. The document discusses design implications like choosing appropriate combinations of layout and granularity based on the data and usage. It calls for more mobile visualization studies with different interaction methods and form factors.
Doesn't IT feel as if everything is about to change? And that you are the ones who can change it?
Join Ian Aitchison as he describes how Process is becoming Physical, how re-thinking ITIL Event Management actually provides the key to changing the future of IT Service Management, and how dramatic shifts in current and pending technology have the potential to take us beyond the tipping point - into a new world of User Oriented IT.
Not just ideas and inspiration, this session contains practical examples of re-shaping 'back-end' ITIL activities into measurable improvements in IT Customer productivity.
If you don't engage now, you might not be engaged tomorrow.
For more about TFT please visit www.tomorrowsfuturetoday.com
Submit to speak at #TFT14 here: list.ly/list/7Pn-tft14-february-2014
Emergency Management Systems for your Organisation, by Intergen
In particularly volatile times, we realise more than ever how important it is to plan for outcomes brought about by situations beyond our control. Emergency Management is high on the organisational agenda, and technology plays a pivotal role in helping us plan for emergency scenarios.
This is an opportunity to hear from Intergen’s Public Safety team on the use of Emergency Management systems.
Presentation for #TFT12: Location and the Future of the Interface
In this presentation, Geoloqi founder Amber Case will highlight why developers of apps should look at what users want to do now, as well as what users want to do in the future, why social apps should try to mirror real-world relationships, why sharing should be about who you share with as well as how long you're sharing, and why developers should think about how to make apps "ambient" and require less user interaction.
See Amber's TFT speaker Pinterest board: http://pinterest.com/servicedesk/amber-case/
Innovative Designs for the Embodied Mind, by Diana Löffler
Innovative ideas break conventions. Breaking conventions often confuses users, because the interfaces do not look or behave the way they are used to. To solve this problem, we can base our designs on a level of 'conventional' knowledge that is not grounded in expertise with technology. This level of knowledge is formed through interacting with our environment as embodied minds.
The document discusses the concept of context awareness in computing systems and the differences between sensor context and people context. Sensor context refers to machine-readable data like GPS coordinates and accelerometer readings, while people context refers to meanings derived from social and cultural practices. The document argues that a focus only on sensor context can lead to spurious connections and misses important aspects of how people understand and interact with technology in their lives. Understanding both sensor context and people context is important for designing context-aware systems that are useful and meaningful to users.
The document discusses technology trends of 2022 by examining theories of how technology innovations are adopted. It predicts that computers and TVs will be fully integrated, allowing access to social media and video calls. Wireless headsets are expected to provide a cleaner listening experience by eliminating wires. Most significantly, the document speculates that advances in virtual reality may allow gaming through mind control alone using only a headset.
The use of the iPad in and for qualitative research, by Merlien Institute
The use of the iPad in and for qualitative research
by Frank-Thomas Naether
Presented at Merlien Institute's Qualitative Consumer Research & Insights Conference 2011
6-8 April 2011, Malta
More info at: www.merlien.org
The document discusses how mobile use of devices like smartphones and tablets is surpassing desktop PC use, with mobility becoming the norm for searching. It notes that search experiences are becoming more interactive and location-based on mobile. Developers are seeing significant revenue growth from mobile search apps. The ability to instantly search from any mobile device is changing how people access and use digital information and data. Mobility is transforming computing and digital experiences are increasingly centered around mobile calling and intuitive, all-in-one mobile devices.
Technology is changing the human experience, creating new connections between people, products and markets around the world. The computer is stepping out, off the desk, even out of our pockets, to become embedded in our world, around us, on us, and even in us. With this trend, user interaction will go above the glass, beyond the screen, and beyond pixels. In his talk, Brandon Edwards addressed the implications of these changes on consumer behavior, and the 5 futures of interaction design.
How to Build Your Future in the Internet of Things Economy, by Jennifer Riggins, Future Insights
FOWA London 2015
The trillion-dollar IoT economy will impact our lives so much more than even the Internet itself. From IoT protocols to hypermedia APIs to devices to new networks of communication, you need to learn how to overcome very arduous security, privacy, and just-too-soon barriers in order to build your own future in the IoT space. Jennifer's talk is a result of talking to dozens of Internet of Things influencers and experts - come along to learn about her findings!
The network as a design material: Interaction 16 workshop, by Claire Rowland
Exploring the UX challenges that the properties of networks and connectivity patterns pose to connected products and the Internet of Things: latency, reliability, and intermittent connectivity.
This document discusses how geographic information systems (GIS) enable the convergence of disparate disciplines and create synergy. It begins by outlining the origins and multi-disciplinary nature of GIS, incorporating fields like cartography, geography, and computer science. It then explains how the visual and functional properties of GIS, like spatial analysis and visualization of relationships, allow for improved communication across domains. Finally, it explores how GIS can help redefine the future by facilitating new discoveries and "what if" scenario analysis through integration of diverse data sources.
Digital Marketing First 2014 - Context Aware Computing and Cross Channel Pers..., by Argus Labs
The document discusses context-aware computing and how Argus Labs is addressing it. Argus Labs has created a sensor fusion platform that can understand context, behavior, and mood using deep learning. It can profile users based on sensors to understand habits and predict human behavior. Argus Labs is applying this across industries like insurance, healthcare, advertising, and more to engage users based on their context in a personalized manner.
This document discusses context-aware mobile advertising and how it can be used to create real-time engagement. It notes that with the rise of wearables, connected devices and smartphones, there are now many sensors that can provide contextual data. Traditional mobile advertising relies on explicit user signals, while context-aware advertising uses implicit signals learned from sensors. This allows for richer and more accurate user profiles. The document describes Argus Labs' platform which uses sensors and deep learning to perform near real-time contextualization and behavioral profiling of mobile users. This can help advertisers better target ads based on factors like location, activity, and mood.
This document discusses context-aware mobile advertising and how it can be used to create real-time engagement. It notes that with the rise of wearables, connected devices and smartphones, there are now many sensors that can provide contextual data. Traditional mobile advertising relies on explicit user signals, while context-aware advertising uses implicit signals learned from sensors. This allows for richer and more accurate user profiles. The document describes Argus Labs' platform which uses sensors and deep learning to perform near real-time contextualization and behavioral profiling of mobile users. This can help advertisers better target ads based on factors like location, activity, and mood.
Mobile user experience conference 2009 - The rise of the mobile contextFlorent Stroppa
The document discusses how mobile devices can leverage context awareness and sensors to improve the user experience. It describes how sensors like accelerometers, gyroscopes, microphones, and location sensors can provide information about the user's situation, environment and activity. With this context, devices can make smarter inferences and behave differently based on factors like location, time of day, activity, and the user's schedule and relationships. This will lead to devices that are less disruptive and more helpful. It also discusses challenges for user experience teams in designing for this new paradigm where inputs are no longer just from the user but also the environment and context.
The document summarizes the research of the Dynamics and Interaction group led by Roderick Murray-Smith. The group explores novel forms of mobile interaction design using inertial sensing, dynamics, and statistics. They have developed several prototypes including Shoogle for informative shaking and BodySpace using constraints in the environment. Their research applies concepts from control theory to create intuitive, honest interfaces that represent uncertainty and can regularize user behavior.
This document provides specifications for the NAO humanoid robot platform produced by Aldebaran Robotics. It describes the robot's hardware components including its Intel Atom processor, cameras, sensors, and 22 degrees of freedom of movement provided by its joints. It also outlines its software features such as computer vision capabilities, speech recognition and synthesis, and programming interfaces. Its applications are described as including education, research, and entertainment.
The path to personalized, on-device virtual assistantQualcomm Research
Machine learning has ignited the voice UI and virtual assistant revolution as machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And fusing multiple on-device sensor inputs, such as camera and accelerometers, in addition to microphones adds a level of personalization that will take us closer to a true personal assistant.
it presents you
1.Introduction to Artificial Intelligence
2.History and Evolution
3.Speech synthesis
4.Robots and Image processing
5.Sensor fusion
6.Innovation in Artificial Intelligence
7.conclusion
The document discusses how mobile devices can leverage context to improve the user experience. It describes how mobile sensors, background processes, personal data, and artificial intelligence combined with the cloud can enable context-aware applications. This will allow mobile phones to behave differently based on factors like location, activity, and time, delivering a more intelligent experience for users.
Wearable Computing and Human Computer InterfacesJeffrey Funk
These slides discuss how improvements in ICs, MEMS, cameras, and other electronic components are making wearable computing and new forms of human-computer interfaces economically feasible. Improvements in digital signal processing ICs and MEMS-based microphones are rapidly improving the technical and economical feasibility of voice-recognition based interfaces. Improvements in 2D and 3D image sensors (e.g., camera ICs) are rapidly improving the technical and economical feasibility of gesture-based interfaces, augmented reality, and virtual reality. Improvements in ICs, MEMS, displays and other components are rapidly making many forms of wearable computing economically feasible; these include many forms of head, arm, torso, and leg-mounted displays. Improvements in the materials for both non-invasive and invasive brain scans are rapidly improving the technical and economical feasibility of neural interfaces.
Filip Maertens - Artificial Intelligence: Building Emotion & Context aware Re...BAQMaR
Argus Labs has created a sensor fusion platform that can understand the context, behavior, and mood of mobile users in real-time using deep learning. The platform can detect emotions, activities, and habits based on sensor data from devices. Argus Labs works with industries like insurance, media, and healthcare to apply contextual insights about users for applications like personalized recommendations, usage-based insurance, and diagnostic support.
Artificial Intelligence - An Introduction acemindia
Artificial Intelligence is composed of two words Artificial and Intelligence, where Artificial defines "man made," and intelligence defines "thinking power", hence AI means "a man-made thinking power.“
Artificial Intelligence exists when a machine can have human based skills such as learning, reasoning, and solving problems.
Artificial Intelligence is composed of two words Artificial and Intelligence, where Artificial defines "man-made," and intelligence defines "thinking power", hence AI means "a man-made thinking power.“
This document describes a voice-operated wheelchair system that allows disabled users to control a wheelchair through voice commands. The system uses a microcontroller, wireless microphone, voice recognition processor and motor control interface to integrate voice command functionality. It is trained to recognize basic movement commands like forward, reverse, left and right. When a user speaks a command into the microphone, the voice recognition processor detects the word and sends the corresponding signal to the microcontroller to drive the motors and move the wheelchair. This system is designed to give wheelchair users independence by enabling control through their voice.
The goal of this project is to provide a platform that allows for communication between able-bodied and disabled people or between computers and human beings. There has been great emphasis on Human-Computer-Interaction research to create easy-to-use interfaces by directly employing natural communication and manipulation skills of humans . As an important part of the body, recognizing hand gesture is very important for Human-Computer-Interaction. In recent years, there has been a tremendous amount of research on hand gesture recognition
Navigation Assistance for Visually Challenged PeopleIRJET Journal
This document presents a navigation system to assist visually impaired people. The system uses a combination of technologies including voice guidance, ultrasonic sensors for obstacle detection, and artificial intelligence. It consists of (1) determining the user's position and orientation, (2) detecting obstacles using ultrasonic sensors, and (3) a user interface. The system provides audio navigation instructions to guide the user and avoid collisions with obstacles. It is designed to allow visually impaired people to safely navigate unfamiliar indoor environments independently without relying on others.
This document discusses non-intrusive methods for recognizing a driver's emotions using vision and acoustic sensing in an advanced driver assistance system. It describes how emotions can impact driver attentiveness and safety. Six primary emotions are identified: anger, disgust, fear, happiness, sadness, and surprise. Various techniques are discussed for extracting visual features from face images and acoustic features from speech to classify emotions, along with their advantages and limitations. Prior work on emotion recognition from speech using Hidden Markov Models, spectral features, and other approaches is also summarized.
This document summarizes an event organized by Pantech Solutions and the Institution of Electronics and Telecommunication (IETE) on the future of artificial intelligence. The event featured several presentations and demos on topics related to AI, including computer vision with deep learning, natural language processing, machine and deep learning, AI applications in various domains like medical, agriculture, autonomous vehicles, and brain-computer interfaces. It also discussed topics like machine learning, deep learning, AI safety concerns, and examples of AI applications in areas like search engines, social media, e-commerce, music and more. The agenda included presentations on object recognition with YOLO, brain enhancement with BCI technology, and a Python AI demo.
Network Driven Behaviour Modelling for Designing User Centred IoT ServicesFahim Kawsar
We are observing a monumental effort from the industry and academia to make everything connected. Naturally, to understand the needs of these connected things, we need a better understanding of humans and where, when, and how they interact. Then we can create digital services and capabilities that fundamentally change the way we experience our lives. IoT 1.0 is all about connectivity, and scale. IoT 2.0 will be about learning and contextual automation. Designing intention- and behavior-aware services will be the principal source of differentiation, and competitive advantage for the industry players. In this talk I argue that for wide scale adoption, and market penetration of personalized IoT services, existing network infrastructure should play the key role for sensing and learning, by eliminating the cost of deployment and management of many sensors. I will show then how wireless network can be used as a sensing platform to model human behaviour and to redefine people-content, people-thing, and people-people interaction experience in an IoT enabled world.
The document discusses massive sensing from both current and future perspectives, including the types of sensors in phones now, the concept of the Internet of Things connecting billions of sensors to share data through the cloud, and the potential for future sensing technologies like embedded sensors and their implications for applications in areas like health, environment, and cities.
Similaire à Earables for Personal-scale Behaviour Analytics (20)
Sensing WiFi Network for Personal IoT Analytics Fahim Kawsar
We present the design, implementation and evaluation of an enabling platform for locating and querying physical objects using existing WiFi network. We propose the use of WiFi management probes as a data transport mechanism for physical objects that are tagged with WiFi-enabled accelerometers and are capable of determining their state-of-use based on motion signatures. A local WiFi gateway captures these probes emitted from the connected objects and stores them locally after annotating them with a coarse grained location estimate using a proximity ranging algorithm. External applications can query the aggregated views of state-of-use and location traces of connected objects through a cloud-based query server. We present the technical architecture and algorithms of the proposed platform together with a prototype personal object analytics application and assess the feasibility of our different design decisions. This work makes important contributions by demonstrating that it is possible to build a pure network-based IoT analytics platform with only location and motion signatures of connected objects, and that the WiFi network is the key enabler for the future IoT applications.
Designing UX for the Internet of ThingsFahim Kawsar
This document summarizes a talk about designing interaction for consumer internet of things (IoT). It discusses how current IoT interactions are app-driven, object-centric, spatially fixed, and temporally constrained, which differs from how humans naturally interact in an activity-centric, spatially distributed, and temporally dispersed manner. The document advocates for designing reflective user experiences for IoT that are spontaneous, personalized, opportunistic, and activity-aware by using techniques like purposeful data, activity awareness, opportunistic interfaces, ambient attention, personalization, and storytelling to better match human behaviors and intentions.
Creative Media Days 2012 Talk on Opportunistic Activity ModelingFahim Kawsar
This document discusses opportunistic analytics for modeling human activity. It presents a methodology that involves collecting and combining data from multiple sources to increase information density, segmenting and profiling behaviors, and inferring activity trajectories. Two case studies are described: one using location-aware social media data to identify 10 activity types, and another using in-home internet activity to map applications to 8 activities. The studies demonstrate enhancing data density and predicting future activity patterns with over 70% accuracy.
Pervasive 2011 Talk on Situated GlyphsFahim Kawsar
This document describes the development of a visual language system (VLsys) for representing medical concepts. The VLsys uses iconic representations combined in a modular way to depict complex concepts. It was evaluated against a word-based display and found to support faster understanding and identification of related concepts. The VLsys focuses on coordination and supporting collaborative discussion across medical experts through its use in desktop environments. Key elements of the VLsys include:
- Representing medical concepts like diseases, drugs, and tests through iconic symbols that are combined modularly
- Organizing icons in a hierarchical structure with contextual relationships
- Providing text explanations on rollover of icons
- Enabling identification and exploration of related concepts
The VLsys
MobileHCI 2010 Talk on Smart Object Interaction Fahim Kawsar
This talk compares two interaction techniques : mobile augmented reality and personal projection in the context of smart object and internet of things interaction.
Fahim Kawsar, Enrico Rukzio, and Gerd Korutem; "An Explorative Comparison of Magic Lens and Personal Projection for Interacting with Smart Objects "; 12th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2010), Lisboa, Portugal September 7th-10th, 2010.
IoT 2010 Talk on System Infrastructure for the Internet of Things.Fahim Kawsar
Supporting Article:
Fahim Kawsar, Gerd Kortuem and Bashar Altakrouri "Supporting Interaction with the Internet of Things across Objects, Time and Space "; Internet of Things 2010 Conference (IoT-2010), Nov 29 - Dec 1, Tokyo, Japan.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
3. Cognitive Assistant: A Seamless Extension of Inner Human Cognition
- 24/7 Contextual Assistant
- Strengthening Willpower
- Safety & Adherence
- Assistive Guidance
@raswak
4. Help us to communicate better. Help us to sleep better. Help us to focus better. Help us to remember and recall better.
5. Behavioural UX
- Cross-Device Interactions
- Spans Across Space and Time
- Ultra-Personalised
Accessing everything, controlling everything, understanding everything: sensing + understanding you and the world around you.
7. AI-Assisted Quantified Enterprise
Implication: People and Space Analytics. Location is the key context; social signals can be extracted from location traces.
Case study: Web Summit, the largest tech conference on the planet. 2015 @ Dublin: 40K+ attendees, 134 countries, ±6,000 sq. metres; startups, entrepreneurs, investors …
Lessons:
- Long-term, actionable, community-driven feedback
- Privacy plays a critical role in users' decision-making process
- The form needs an established primary purpose for sustainable engagement
Goal: understand, quantify and radically transform how people interact, feel, collaborate and work together in the real enterprise, for personal, group and larger organisational efficiency.
ACM UbiComp 2015, 2016, ICMI 2016, MobileHCI 2016
8. Lessons
- Actionable and long-term feedback at the right moment is key to sustainable engagement
- Battery performance is absolutely important
- Privacy plays a critical role in users' decision-making process
- The form needs an established primary purpose for sustainable engagement
12. Earables: the most personal device yet
- Immediate and subtle interaction
- Unique placement for robust sensing
- Intimate and privacy-preserving
- With an established purpose
- Aesthetically beautiful
- Ergonomically comfortable
Sense → Learn → Act: sensor signals feed AI/ML models that drive actions.
13. eSense Earable
[Figure: Signal-to-Noise Ratio (SNR) of eSense in comparison to a smartphone and a smartwatch for motion and audio sensing]
Hardware: CSR processor, flash memory, 45 mAh Li-Po battery with contact charging, speaker, 6-axis IMU sensor, microphone, push button, multi-colour LED, Bluetooth/BLE.
Size: 18x18x20 mm. Weight: 20 g.
IEEE Pervasive 2018
15. eSense Earable
HEAD GESTURE: detection of basic head gestures, nodding and shaking, from IMU signals; the set of head gestures can be further expanded to tilting, turning, …
SIGNAL BEHAVIOUR: cleaner signals from the earbuds due to their unique placement. [Figure: gyroscope and accelerometer traces while nodding and shaking]
MULTIMODAL MODEL: statistical features from the accelerometer and gyroscope, combined and fed to a nearest-neighbour classifier with labels nodding, shaking and other.
PERFORMANCE: over 90% accuracy with the accelerometer only. [Bar chart: F1 scores of 0.93 (fusion), 0.89 (accelerometer) and 0.85 (gyroscope)]
ACM WearSys 2018
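The multimodal model above (statistical features over IMU windows, classified with a nearest neighbour) can be sketched as follows. This is a minimal single-axis toy, not the eSense implementation: the feature set (mean, standard deviation, peak-to-peak range) and all the windows are made-up assumptions.

```python
import math
import statistics

def features(window):
    """Statistical features of one IMU axis window:
    mean, standard deviation and peak-to-peak range."""
    return [statistics.mean(window),
            statistics.pstdev(window),
            max(window) - min(window)]

def nearest_neighbour(sample, train):
    """1-NN over Euclidean distance in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda item: dist(item[0], sample))[1]

# Toy training set of (feature vector, label) pairs from made-up
# gyroscope pitch-axis windows.  In a real pipeline nodding and shaking
# are separated by which axis moves; this single-axis toy only
# separates them by amplitude.
train = [
    (features([0.9, -0.8, 1.0, -0.9, 0.8, -1.0]), "nodding"),
    (features([0.3, -0.3, 0.35, -0.3, 0.3, -0.35]), "shaking"),
    (features([0.05, 0.0, -0.05, 0.05, 0.0, -0.05]), "other"),
]

query = features([1.1, -1.0, 0.9, -1.1, 1.0, -0.9])
print(nearest_neighbour(query, train))  # nodding
```

In the published pipeline the accelerometer and gyroscope feature vectors are concatenated ("combined features") before classification, which is what lifts the fusion F1 above either sensor alone.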
16. eSense Earable
PHYSICAL ACTIVITY: detection of basic activities from IMU signals: stationary, walking, stepping up and stepping down.
SIGNAL BEHAVIOUR: cleaner signals from the earbuds due to small head movements. [Figure: accelerometer and gyroscope signals while walking]
MULTIMODAL MODEL: statistical features from the accelerometer and gyroscope, combined and fed to a nearest-neighbour classifier with labels stationary, walking, stepping up, stepping down and other.
PERFORMANCE: over 90% accuracy with the accelerometer alone; more robust to placement than a watch or a phone. [Bar chart: average F1 scores of 0.96 (fusion), 0.95 (accelerometer) and 0.62 (gyroscope)]
ACM WearSys 2018
17. eSense Earable
DIET: detection of dietary activities, drinking and chewing, from audio and IMU signals.
SIGNAL BEHAVIOUR: cleaner signals from the earbuds due to small head movements. [Figures: audio spectrogram while chewing; gyroscope data while drinking]
MULTIMODAL MODEL: MFCC features from the microphone and statistical features from the accelerometer and gyroscope, combined and fed to a random forest with labels drinking, chewing and other.
PERFORMANCE: 78% accuracy for the fusion classifier even with simple features; outperforms the single-sensor classifiers. [Bar chart: per-class F1 scores for chewing and drinking across the fusion, audio-only and IMU-only classifiers, with fusion reaching 0.78]
ACM MobiSys 2018
18. eSense Earable
HEART RATE
Capable of detecting heart rate from the in-ear microphone, following the ECG pattern.
SIGNAL BEHAVIOUR
The amplified sound of heartbeats can be easily captured due to the in-ear placement.
SIGNAL PROCESSING
Pipeline: microphone (raw signal) → low-pass filter → amplifier → peak detector → heart rate.
PERFORMANCE
Simple filtering and peak detection are enough for reliable detection, with an average error of 2.4 BPM.
[Chart: beats per minute: ours 81.6 vs. ground truth 84.0]
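The low-pass-filter-then-peak-detector pipeline on this slide can be sketched in a few lines. The sampling rate, cutoff frequency, and minimum inter-beat spacing are assumptions chosen for the synthetic input, not values from the talk.

```python
# Sketch of the slide's pipeline: low-pass filter -> envelope -> peak
# detector -> heart rate, on an assumed 1 kHz in-ear microphone stream.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_bpm(audio, fs=1000):
    # Heart sounds concentrate below ~40 Hz; keep only that band.
    b, a = butter(4, 40 / (fs / 2), btype="low")
    filtered = filtfilt(b, a, audio)
    envelope = np.abs(filtered)  # stands in for the "amplifier" stage
    # Peaks at least 0.4 s apart (caps detection at 150 BPM).
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),
                          height=envelope.mean())
    if len(peaks) < 2:
        return 0.0
    ibi = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / ibi.mean()           # beats per minute

# Synthetic check: an 80 BPM pulse train.
fs = 1000
t = np.arange(0, 10, 1 / fs)
beats = (np.sin(2 * np.pi * (80 / 60) * t) > 0.99).astype(float)
print(round(estimate_bpm(beats, fs)))  # ~80 for this synthetic input
```

Averaging the inter-beat intervals is what keeps the estimate stable against a single missed or spurious peak.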
19. eSense Earable
CONVERSATION
Detection of speech segments using the IMU and a simple, lightweight classifier.
SIGNAL BEHAVIOUR
Cleaner phase response from the IMU for detecting speech segments.
MULTIMODAL MODEL
Statistical features from the accelerometer and the gyroscope, combined and fed to an SVM over the classes speaking and non-speaking.
PERFORMANCE
85% accuracy in speaking detection with inertial sensors only; much more robust to ambient noise (e.g., a nearby person speaking); can act as an energy-efficient trigger for the more expensive microphone.
[Chart: F1 scores: audio 0.65, IMU 0.86, all sensors 0.88 (a roughly 20% gain over audio alone)]
ACM WellComp 2018
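The "energy-efficient trigger" idea above is a gating pattern: a cheap IMU check decides whether the expensive microphone pipeline runs at all. The variance threshold and the stand-in detectors below are assumptions for illustration.

```python
# Sketch: cheap IMU-based speaking check gating the expensive audio pipeline,
# so the microphone path only runs when the IMU suggests speech.
import numpy as np

def imu_says_speaking(gyro_window, threshold=0.1):
    # Cheap proxy: jaw/head micro-motion raises gyroscope variance.
    return gyro_window.std() > threshold

def expensive_audio_pipeline(audio_window):
    # Placeholder for the full microphone classifier.
    return "speech-analysed"

def process(gyro_window, audio_window):
    if not imu_says_speaking(gyro_window):
        return "skipped (mic stays off)"
    return expensive_audio_pipeline(audio_window)

rng = np.random.default_rng(2)
quiet = rng.normal(0, 0.01, 200)     # still head: low gyro variance
speaking = rng.normal(0, 0.5, 200)   # speaking: higher gyro variance
audio = rng.normal(0, 1, 1600)

print(process(quiet, audio))     # → skipped (mic stays off)
print(process(speaking, audio))  # → speech-analysed
```

The gain comes from duty-cycling: the IMU features are orders of magnitude cheaper to compute than the audio features, so most windows never touch the microphone path.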
20. eSense Earable
FACIAL EXPRESSION
Fusion of IMU and audio signals with an SVM followed by HMM smoothing.
SIGNAL BEHAVIOUR
Cleaner phase response from the IMU for detecting facial expressions.
[Figure: gyroscope data against camera ground truth: stationary, pull-up movement, pull-down, stationary]
MULTIMODAL MODEL
MFCC features from the microphone and statistical features from the accelerometer and the gyroscope, passed through feature selection into an SVM; the SVM's state sequence is then smoothed by an HMM into the classes laugh, smile, frown, and other.
PERFORMANCE
70-80% F1 score with statistical features; high user variability for the 'smiling' expression.
[Chart: F1 scores between 0.61 and 0.81 across other, smile, laugh, and frown]
ACM AH 2019
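The HMM-smoothing stage above can be sketched with Viterbi decoding: per-window class probabilities from the SVM act as emission likelihoods, and a "sticky" transition matrix suppresses single-frame flips. The transition probability is an illustrative assumption, not the paper's value.

```python
# Sketch: smoothing a sequence of per-window SVM class probabilities with
# Viterbi decoding over a sticky transition matrix.
import numpy as np

def viterbi_smooth(probs, stay=0.9):
    # probs: (T, K) per-window class probabilities from the SVM.
    T, K = probs.shape
    trans = np.full((K, K), (1 - stay) / (K - 1))
    np.fill_diagonal(trans, stay)          # prefer staying in the same class
    log_p = np.log(probs + 1e-12)
    log_t = np.log(trans)
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = log_p[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_t   # [prev, cur]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_p[t]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):              # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# A one-frame "frown" blip inside a run of "smile" gets smoothed away.
# Classes: 0 = smile, 1 = frown.
probs = np.array([[0.9, 0.1]] * 3 + [[0.4, 0.6]] + [[0.9, 0.1]] * 3)
print(viterbi_smooth(probs))  # → [0, 0, 0, 0, 0, 0, 0]
```

The blip survives only if its emission evidence outweighs two switch penalties (into and out of the state), which is exactly the temporal consistency the raw SVM lacks.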
21. Situation-Aware Conversational Agent
KEY OBJECTIVE
Bringing cognition to conversational agents to radically transform their ability to assist and augment humans.
KEY APPLICATIONS
Customer experience, conversational commerce, digital health, entertainment, education, home automation, and lifestyle.
KEY INNOVATION
• AI-assisted software platform to understand emotion and situation at personal scale
• AI-as-a-Service that enables conversational agents to become situation-aware and dynamically adjust their conversation style, tone, and volume in response to the user's emotional, social, activity, and environmental context
• Capabilities: emotion awareness, sociality awareness, activity awareness, realtime adaptation
KEY NUMBERS
• 97.8% recognition accuracy
• 1.2 sec recognition latency
• 2.48 sec end-to-end latency
ACM ACII 2019
22. 360 Wellbeing Management and Cognitive Augmentation
Key Objective
Understand, quantify, and radically transform how people interact, feel, collaborate, and work together in the real enterprise, for personal, group, and larger organisational efficiency.
Implication
• People and space analytics
• Stress and happiness analytics
• Physical social network
End-to-End Architecture
APP layer:
• Audio and motion sensor processing
• Speech and one-touch interactions
• HD-quality music
• Speech recognition and speech synthesis
• Notification management
• Context processing and BLE localisation
• External service interaction
• Conversational agent
• Selective rule engines
Inference Engine for Realtime Context Awareness:
• Inputs: microphone, accelerometer, gyroscope, and BLE, with MFCC, statistical, and BLE RSS features feeding per-modality AI models
• Configuration knobs: sampling rate, duty cycle, and packet interval
• Context primitives: audio → conversational activity, environment dynamics, emotion; motion → head gesture, physical activity; location → face-to-face interaction
• Inferred context: heart rate; emotion and stress; eating and drinking; conversation; ambient environment; stationary / walking / on-transport; head gesture; placement; social interaction; proxemic interaction
23. Interaction with People, Places, and Things On-the-Go
• Feedback on physical and mental wellbeing
• Feedback on collaboration and social behaviour
• Personalised recommendations on wellbeing
28. Multiple devices offer more, better, and longer learning opportunities, at the expense of significant complexity.
Design for Multiplicity - Cognitive Orchestration
CHALLENGE 1: How to select, combine, and compose devices to contextually construct a dynamic sensing pipeline for the highest QoS?
29. COGNITIVE ORCHESTRATION
Multi-device sensory AI systems: select and orchestrate the best devices for the task at hand, maximising accuracy and minimising energy.
• Learn the runtime sensing quality of multiple devices using a Siamese neural net
• Predict the best inference path, addressing device and usage variability
• Eliminate redundant computation
• 2x accuracy gain at the expense of 13 mW of energy
• 4x energy gain, inversely proportional to the number of devices
Case studies: motion-based physical activity detection; audio-prosody-based emotion detection.
SenSys 2019
30. Design for Robustness - Cognitive Translation
CHALLENGE 2: Every single execution environment (sensor, device, OS, user) is different. How to build robust sensory systems for 100 billion AI devices, some of which have not been invented yet?
Guarantee that a model maintains its functional behaviour across heterogeneous conditions:
• Environment-to-environment translation
• Device-to-device translation
• OS-to-OS translation
• Sensor-to-sensor translation
31. COGNITIVE TRANSLATION
Robust and future-proof sensory AI systems: sensory models that work irrespective of how and where the sensor data is collected.
• Generative models for domain adaptation and domain generalisation
• Brand-new model architecture built on CycleGAN principles for learning domain translation functions
• Recovers up to 90% of the accuracy lost to device variability using 15 minutes of unlabelled data
[Charts: accuracy loss and recovery. Case 1: audio signal under device variability (iPhone, S8, Mic2Mic). Case 2: motion signal under user variability (thigh, chest, Accel2Accel).]
IPSN 2018, IPSN 2019
32. Qualitative insights need to shape a system's runtime behaviour
CHALLENGE 3: How to shape and extend a system's behaviour at different phases in a personalised way? Turn user interaction into learning parameters.
33. COGNITIVE EXTENSION
Privacy-preserving and personalised extension of sensory AI systems: extend a sensory system's abilities in a personalised, user-defined way using on-device continual learning (e.g., adding running, cycling, and swimming models over time).
• Semi-supervised Bayesian continual learning: small, imperfectly labelled supervised datasets plus data augmentation over unlabelled data
• Rich approximate posteriors with uncertainty estimates, trained with an upper-bounded KL loss, a cross-entropy loss, and a JSD loss between prior and approximate posterior
• 90% accuracy across multiple learning periods for extension
• Only 10% of the data is retained; labelled-data requirements reduced by 80%
[Charts: continual-learning accuracy for motion tasks (other, walking, sitting, walking upstairs, walking downstairs, standing, laying) over five periods; retained vs. new samples per period (986, 1286, 1374, 1211 retained against 362, 461, 590, 728 new across periods 2-5)]
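One way to realise the "only 10% of data is retained" budget above is a fixed-capacity rehearsal buffer filled by reservoir sampling, so each learning period keeps a uniform subsample of everything seen so far. This is a generic sketch of the retention policy only; the paper's actual method is semi-supervised Bayesian continual learning, and the buffer design here is an assumption.

```python
# Sketch: fixed-budget rehearsal buffer via reservoir sampling, keeping a
# uniform ~10% subsample of a stream for replay in later learning periods.
import random

class RehearsalBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.seen = 0
        self.samples = []

    def add(self, sample):
        # Classic reservoir sampling: after n additions, every sample seen so
        # far has an equal capacity/n chance of being in the buffer.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

random.seed(0)
buf = RehearsalBuffer(capacity=100)   # ~10% of a 1000-sample period
for i in range(1000):
    buf.add(i)
print(len(buf.samples))  # → 100
```

The replay buffer is then mixed with each period's new samples, which is why the "retained" series in the chart stays an order of magnitude below the raw data volume.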
34. Design for Efficiency (and Privacy) - Cognitive Efficiency
CHALLENGE 4: Where will we find the next 10x gain? Scale down cloud-scale algorithms to run locally on devices.
Goals: inference performance, privacy protection, energy awareness.
35. COGNITIVE EFFICIENCY
Privacy-preserving software accelerator for sensory AI systems, targeting inference performance, privacy protection, and energy awareness.
• Online model compression: compress deep neural networks with negligible degradation in accuracy; factorisation reduces memory and computational requirements
• Dynamic model fusion: simultaneous execution of multiple models through parallelisation of parameter-heavy and computation-heavy layers; 1.5x gain in overall execution time with runtime model fusion
• Optimal resource allocation: reduce the energy footprint of neural networks and allocate an optimal set of resources at runtime
IPSN 2016, SenSys 2016, MobiSys 2017, IEEE Pervasive 2017
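The factorisation idea behind the compression point above can be illustrated with a truncated SVD: a dense layer's weight matrix W is replaced by a low-rank product U·V, cutting both parameters and multiply-adds. The layer size and rank below are illustrative assumptions, not figures from the papers.

```python
# Sketch: low-rank factorisation of a dense layer's weights via truncated SVD.
import numpy as np

def factorise(W, rank):
    # Truncated SVD gives the best rank-r approximation in Frobenius norm.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]   # shapes (m, r) and (r, n)

rng = np.random.default_rng(3)
# A 256x256 layer that is approximately low-rank, as trained layers often are.
W = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 256))
U, V = factorise(W, rank=8)

params_before = W.size                  # 256 * 256 = 65536
params_after = U.size + V.size          # 256*8 + 8*256 = 4096
err = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
print(params_before, params_after, round(err, 6))
# 65536 parameters -> 4096, with near-zero reconstruction error
```

At inference time the layer computes x @ U @ V instead of x @ W, so the multiply-add count drops by the same factor as the parameter count.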
36. Design needs to shape the understanding ability of IoT systems
From recognition to understanding: {Design}-enabled understanding (e.g., comfort, a memorable conversation).
CHALLENGE 5: How to define learning targets based on UX, rather than on literals, towards a universal understanding model?
37. Intelligibility
CHALLENGE 6: How to embed intelligibility in a sensory system's behaviour? Engage users and keep them informed about the system's behaviour; answer the "why".
38. Design needs to guide AI-assisted wearables' failover strategy - Design for AI Failure
CHALLENGE 7: How to guide the intelligibility of sensory systems in dealing with failure, and in deciding when to engage a human for the right UX?