1. Machine Learning with Ubiquitous Sensing:
The Case of Robust Detection and
Classification of Targets in Close Proximity
Authors: Varun Garg, Brooks P. Saunders and Thanuka Wickramarathne
University of Massachusetts Lowell, Lowell, MA 01854 USA
May 28, 2023
4. Research Overview
• Focus on Situational Awareness (SA) in complex situations involving
inter-dependent entities, e.g., autonomous driving, emergency
response and disaster management
• Rapidly growing interest in the use of Ubiquitous Sensing, e.g.,
smartphones, in-built vehicle sensors
• Our Research: Use of ubiquitous sensing for enhanced SA
• This paper: Can we utilize Machine Learning (ML) for some
challenging tasks associated with the use of ubiquitous sensing for SA?
5. Application Example: Road Threat Assessment...
• Road quality assessment and repair is a slow and costly proposition
• In the US, motorists spend about $300 per car per year on repair costs
• US Congress allocated $305B in 2008 (soon after the financial crisis)
• Increasing number of methods using MEMS 3-D accelerometers
• Vary from simple thresholding to classification to advanced filtering
• Often approached as a Machine-Learning (ML) problem
• Use of accelerometers is more attractive than, e.g., radar or imaging
• low-cost, low-complexity, deployable to virtually all vehicles
• smartphones can be used for sensing, processing, and communication
• QUESTION: Can we use ML with Ubiquitous Sensing for SA?
• transportation infrastructure health management
• energy efficient driving, e.g., optimal speed profiles
• safety and comfort of motorists
6. Illustrative Application: Road Threat Assessment...
• Our proposed framework uses smart-phones and in-built vehicle
sensors
• SA is defined in terms of road hazards/conditions and other related
parameters for energy efficiency, safety and comfort
• Distributed identification of events/parameters in (near) real-time
• Uses a suite of L1/L2/L3 fusion methods for different tasks
[Figure: example road threats — (a) Potholes, (b) Cracks, (c) Bumps, (d) Other]
8. Use of ML in Close Proximity
Threat Detection and Classification
9. ML for Threat Detection and Classification
[Figure: (e) Multiple Hazards, (f) Vibration Measurement]
10. ML for Threat Detection and Classification...
[Figure: (g) Raw acceleration, (h) Spectrogram]
11. Features: ML for Threat Detection

Feature      Description                                      # Features
X-axis
f1           Mean                                             1
f2           RMS                                              1
f3           Autocorrelation at lag 0 (i.e., signal energy)   1
f4           Autocorrelation second-peak magnitude            1
f5           Autocorrelation second-peak lag                  1
f6–f11       Frequency at highest 6 spectral peaks            6
f12–f17      Power at highest 6 spectral peaks                6
f18–f27      PSD at pre-defined frequency bands               10
Y-axis
f28          Mean                                             1
⋮            (same per-axis features as above)                ⋮
f45–f54      PSD at pre-defined frequency bands               10
Z-axis
f55          Mean                                             1
⋮            (same per-axis features as above)                ⋮
f72–f81      PSD at pre-defined frequency bands               10
Tri-axis
f82          Total acceleration energy                        1
f83          Pearson correlation coefficient, ρ(ẍ, ÿ)         1
f84          ρ(ẍ, z̈)                                          1
f85          ρ(ÿ, z̈)                                          1
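The per-axis time- and frequency-domain features above can be sketched with NumPy. This is a minimal illustration, not the authors' implementation: the paper's exact peak-picking rules and PSD band edges are not given here, so only the highest spectral peak is extracted and band powers are omitted.

```python
import numpy as np

def extract_axis_features(a, fs=100.0):
    """Sketch of per-axis features from the table: mean, RMS,
    autocorrelation statistics, and the highest spectral peak."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    feats = {"mean": float(a.mean()),
             "rms": float(np.sqrt(np.mean(a ** 2)))}

    # Autocorrelation; the value at lag 0 equals the signal energy sum(a^2).
    ac = np.correlate(a, a, mode="full")[n - 1:]
    feats["energy"] = float(ac[0])
    # Second autocorrelation peak: first local maximum after lag 0.
    lag = next((i for i in range(1, n - 1)
                if ac[i] > ac[i - 1] and ac[i] >= ac[i + 1]), 0)
    feats["ac2_lag"], feats["ac2_mag"] = lag, float(ac[lag])

    # Highest spectral peak (DC bin excluded).
    spec = np.abs(np.fft.rfft(a)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = 1 + int(np.argmax(spec[1:]))
    feats["peak_freq"], feats["peak_power"] = float(freqs[k]), float(spec[k])
    return feats

# Example: a 5 Hz sinusoid sampled at 100 Hz for 2 s.
t = np.arange(200) / 100.0
f = extract_axis_features(np.sin(2 * np.pi * 5 * t), fs=100.0)
```

For the sinusoid, the second autocorrelation peak lands at the signal period (20 samples) and the dominant spectral peak at 5 Hz.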
12. Image Classifier: Threat Detection and Classification
Figure 1: Classification using a MobileNet-based object-detection model
• A MobileNet-based multi-class object-detection deep-learning model was
utilized to detect and classify the spatio-temporal (ST) phenomena
13. ML for Threat Detection and Classification...
• {White-line-blur, Cross-walk-blur} map to class RB (see Table III
for classifier performance using the MobileNet model)
TABLE II
IMAGE CLASSIFIER: CLASS DEFINITIONS AND FOD MAPPING

Damage Type   Description                                 Class   Θ(i)
Crack         Linear, Longitudinal, Wheel-marks           D00     CR
              Linear, Longitudinal, Construction-joint    D01     CR
              Linear, Lateral, Equal-interval             D10     CR
              Linear, Lateral, Construction-joint         D11     CR
              Alligator-Crack, Pavement                   D20     CR
Others        Rutting, Separation                         D40     OC
Bump                                                              BP
Pothole                                                           PH
              White-line-blur                             D43     RB
              Cross-walk-blur                             D44     RB
TABLE III
IMAGE CLASSIFIER PERFORMANCE
[Confusion matrix of predicted vs. projected classes over D00, D01, D10,
D11, D20, D40, D43, D44, CR, BP, PH, OC, RB; numeric entries not
recoverable from the extraction]
14. ML for Threat Detection and Classification...
[Figure: (a) Multiple Hazards, (b) Vibration Measurement]
• Image classifiers: good at identifying ‘visually’ different threat types
• But they fail to differentiate between distinct yet ‘visually’ similar
threats in close proximity
16. Our Approach
• We model multi-modality classifiers as logical sensors
• ML classifier model performance characteristics were used for sensor
modeling and alignment
• These logical sensors are then utilized in a decision-level fusion setup
• This allows us to exploit complementary sensing capabilities
• Having fewer data points per classification directly reduces sensor reliability
• One way to circumvent this is to use belief revision
18. MEMS Data Normalization: Suspension System Modeling

q(t) = (m/c) ẍ + ẋ + (k/c) x   (1)

• here x and y are the horizontal and vertical shift distances, respectively
• here m, k, and c are the mass, spring coefficient, and damping coefficient,
respectively
• As shown in [1], the vertical displacement can be recovered using (m/c) and (k/c).
• The ratios (m/c) and (k/c) can be estimated from the frequency of the
under-damped vibration and the amplitudes of multiple consecutive MEMS samples.
• Figure and modelling credits: [1]
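One standard way to realize the estimation step above is the logarithmic-decrement method: two consecutive peak amplitudes of the under-damped vibration give the damping ratio, and the damped period gives the natural frequency, from which (m/c) and (k/c) follow. This is a sketch of that textbook approach; the exact procedure used in [1] may differ.

```python
import math

def suspension_ratios(a1, a2, t_d):
    """Estimate (m/c, k/c) from consecutive peak amplitudes a1, a2 of
    the under-damped vibration and the damped period t_d."""
    delta = math.log(a1 / a2)                                # log decrement
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)  # damping ratio
    omega_d = 2 * math.pi / t_d                              # damped frequency
    omega_n = omega_d / math.sqrt(1 - zeta ** 2)             # natural frequency
    m_over_c = 1.0 / (2 * zeta * omega_n)  # since c/m = 2*zeta*omega_n
    k_over_c = omega_n ** 2 * m_over_c     # since k/m = omega_n^2
    return m_over_c, k_over_c

# Synthetic check: a system with zeta = 0.1, omega_n = 10 rad/s
# has m/c = 0.5 and k/c = 50.
zeta, wn = 0.1, 10.0
wd = wn * math.sqrt(1 - zeta ** 2)
a2 = math.exp(-2 * math.pi * zeta / math.sqrt(1 - zeta ** 2))
ratios = suspension_ratios(1.0, a2, 2 * math.pi / wd)
```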
19. MEMS Data Normalization : Parameter Identification
• (m/c) and (k/c) are found using the frequency of the under-damped vibration
and the amplitudes of multiple consecutive MEMS samples. Figure credits: [1]
20. Time synchronization of MEMS Location Data Samples
x = v · t

[Figure: Z[k] data frame containing longitude, latitude, acceleration, …]
• Events/objects are classified based on ‘features’
• Challenge lies with the number of data points
• E.g., at a 30 mph driving speed, you’ll collect only about 7 data points
over 1 m at a 100 Hz sampling rate
• What do we do now?
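The back-of-envelope figure in the bullet above follows directly from speed, event length, and sampling rate:

```python
def samples_per_event(speed_mph, event_len_m=1.0, fs_hz=100.0):
    """Accelerometer samples collected while traversing an event of
    the given length (the slide's back-of-envelope calculation)."""
    speed_mps = speed_mph * 0.44704  # miles/hour -> metres/second
    return fs_hz * event_len_m / speed_mps

n = samples_per_event(30.0)  # roughly 7 samples over 1 m at 30 mph, 100 Hz
```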
21. Time Synchronization of Camera and MEMS Data Samples

[Figure: distance-vs-time view of a target at (t0, d0); image acquisitions
Ik, Ik+1, Ik+2 at times tk, tk+1, tk+2 are aligned against the
inertial-measurement stream, with offsets δd and τ0]
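The alignment in the figure can be sketched by interpolating the higher-rate inertial stream at each image timestamp. The timestamps and signal below are illustrative, not the paper's data:

```python
import numpy as np

# 100 Hz accelerometer clock and an illustrative vertical-acceleration trace.
mems_t = np.arange(0.0, 1.0, 0.01)
mems_z = np.sin(2 * np.pi * 2 * mems_t)

# Camera frame timestamps (a few frames per second).
img_t = np.array([0.10, 0.43, 0.77])

# Inertial value attached to each image I_k via linear interpolation.
z_at_images = np.interp(img_t, mems_t, mems_z)
```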
22. Sensor Alignment: Vibration Classifier
• Without discounting for sensor reliability:

  P_k^(v)↓Θ (θ | S_k^(v)) = P^(v) (θ | S_k^(v), Θ)

• With discounting for sensor reliability:

  P_k^(v)↓Θ (θ | S_k^(v)) = λ_k^(v) P^(v) (θ | Θ, S_k^(v)) + (1 − λ_k^(v)) / |Θ|
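The discounting step above shrinks the classifier's posterior toward the uniform distribution over Θ according to its reliability λ. A minimal sketch:

```python
def discount(p, lam):
    """Reliability discounting: mix posterior p over Theta with the
    uniform distribution; lam = 1 trusts the sensor fully, lam = 0
    ignores it entirely."""
    n = len(p)
    return [lam * pi + (1.0 - lam) / n for pi in p]

# A fully confident but only 50%-reliable vibration classifier.
q = discount([1.0, 0.0, 0.0, 0.0], lam=0.5)
```

The discounted posterior stays a valid probability distribution for any λ in [0, 1].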
23. Sensor Alignment: Image Classifier
P_k^(i) (θ | S_k^(i)) ≈
  Σ_{s ∈ {D00,…,D20}, s ∈ S_k^(i)}  C_k^(i)(s),   θ = CR
  Σ_{s = D40, s ∈ S_k^(i)}          C_k^(i)(s),   θ = OC, BP, PH
  Σ_{s ∈ {D43, D44}, s ∈ S_k^(i)}   C_k^(i)(s),   θ = RB

P_k^(i)↓Θ (θ | S_k^(i)) = λ_k^(i) P_k^(i) (θ | Θ, S_k^(i)) + (1 − λ_k^(i)) / |Θ|
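The piecewise sums above coarsen the image classifier's fine-grained detections onto the frame of discernment. A sketch, with class codes taken from Table II and illustrative confidence values:

```python
# Fine-grained class groups from Table II.
CR_CLASSES = {"D00", "D01", "D10", "D11", "D20"}
RB_CLASSES = {"D43", "D44"}

def coarsen(detections):
    """detections: dict of fine class s -> confidence C_k(s)."""
    scores = {"CR": 0.0, "OC_BP_PH": 0.0, "RB": 0.0}
    for s, c in detections.items():
        if s in CR_CLASSES:
            scores["CR"] += c        # any crack subtype counts as CR
        elif s == "D40":
            scores["OC_BP_PH"] += c  # shared mass for OC, BP, PH
        elif s in RB_CLASSES:
            scores["RB"] += c        # white-line / cross-walk blur
    return scores

scores = coarsen({"D00": 0.3, "D20": 0.2, "D40": 0.1, "D43": 0.4})
```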
24. Decision-level Fusion and Belief Revision
• Decision-level Fusion:

  P_k (θ | S_k^(v), S_k^(i)) = P_k^(v)↓Θ (θ | S_k^(v)) ⊕ P_k^(i)↓Θ (θ | S_k^(i))

• Belief Revision:

  P_{1:k+1}(θ) = α_k P_{1:k}(θ) + (1 − α_k) P_k (θ | S_k^(v), S_k^(i))
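The two steps above can be sketched as follows. The ⊕ operator is taken here as a normalized element-wise product, a common conjunctive combination rule; the paper's exact operator is defined in the full text, so this is an assumption.

```python
def fuse(p, q):
    """Decision-level fusion p ⊕ q, sketched as a normalized
    element-wise product of the two aligned posteriors."""
    prod = [a * b for a, b in zip(p, q)]
    z = sum(prod)
    return [x / z for x in prod]

def revise(belief, fused, alpha):
    """Belief revision: convex mix of the running belief P_{1:k}
    and the newly fused posterior P_k."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(belief, fused)]

# Two-class illustration, e.g. over {pothole, crack}.
post = fuse([0.5, 0.5], [0.8, 0.2])
belief = revise([0.5, 0.5], post, alpha=0.5)
```

With α near 1 the running belief changes slowly; with α near 0 it tracks each new fused decision.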
31. Results: A Single Scenario of Multiple Threat Detection
Figure 2: Probability of different threats across data-collection trials
(threat classes: Bump, Crack, Pothole, Dirt Road, Rough Road)
• The estimated probabilities of a pothole and a crack are much higher
than those of the other threats.
33. Concluding Remarks
• Use of ubiquitous-sensing in Situational Awareness will
likely have a major impact on many domains
• Use of ML and AI can assist in challenging
detection/classification tasks
• Blindly applying ML or AI techniques will likely not improve
performance
• Leverage the rich literature on fundamentals of
multi-sensor multi-modality fusion
34. Concluding Remarks...
• We have demonstrated this potential with a specific
application example
• On-going related work involves advanced modeling to
further improve performance
• Look out for an upcoming journal paper on complete
modeling details and extensive evaluation
• Latest on-going work on SAFENETS involves
estimation/detection of dynamic spatio-temporal
phenomena
35. References i
[1] G. Xue, H. Zhu, Z. Hu, J. Yu, Y. Zhu, and Y. Luo, “Pothole in the dark: Perceiving pothole profiles with participatory urban vehicles,” IEEE Transactions on Mobile Computing, vol. 16, no. 5, pp. 1408–1419, 2017.