Research presentation on Autonomous Driving: the Direct Perception approach.
Research work by a Princeton University group.
Note: Link given in the presentation
4. Evolution in Car Technology – 1st Gen
Manual Transmission
Combustion Engine
Basic Functions
Low Fuel Efficiency
5. Evolution in Car Technology – 2nd and Current Gen
Automatic Transmission
Hybrid Technology
Cruise Control
High Fuel Efficiency
Multimedia and GPS Nav
6. Evolution in Car Technology – Future Gen
Driverless Intelligent Car
Autopilot Switching
Automatic Parking
Automatic Braking
Traffic Sign Recognition
8. Behavior Reflex
Directly maps the input image to steering angles using neural networks
To learn the model, a human drives the car on the road, and the recorded images serve as training data
Human decisions can differ unpredictably in similar situations, which makes the mapping difficult to learn
Less accurate results
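The behavior-reflex idea above can be sketched as a toy regression: map a flattened camera image directly to a steering angle, fit on (image, human steering) pairs. Real systems use deep neural networks; the ridge-regression "one-layer net" on synthetic data below is purely illustrative, and all names are assumptions.

```python
import numpy as np

# Toy sketch of behavior reflex: flattened image -> steering angle,
# fit by least squares on recorded (image, human steering) pairs.
rng = np.random.default_rng(0)

def train_reflex_model(images, steering, lam=1e-6):
    """Fit weights w so that images_flat @ w ~ steering."""
    X = images.reshape(len(images), -1)        # flatten each frame
    A = X.T @ X + lam * np.eye(X.shape[1])     # small ridge term for stability
    return np.linalg.solve(A, X.T @ steering)

def predict_steering(w, image):
    return float(image.ravel() @ w)

# Synthetic "recorded drive": 200 frames of 8x8 images with steering labels.
images = rng.normal(size=(200, 8, 8))
true_w = rng.normal(size=64)
steering = images.reshape(200, -1) @ true_w    # stand-in for human labels

w = train_reflex_model(images, steering)
print(abs(predict_steering(w, images[0]) - steering[0]) < 1e-3)  # True
```

Because the toy labels are noiseless, the fit recovers them almost exactly; with real human driving data, the inconsistency noted above is exactly what degrades such a direct mapping.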
9. Mediated Perception
Parses the entire scene to make driving decisions
Recognizes lanes, cars, traffic signs, traffic lights, pedestrians, etc.
Combines the results, then sends them to an AI-based engine to make the decision
Currently the state-of-the-art approach in autonomous driving
Used by the Google car, Mobileye, etc.
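The mediated-perception pipeline described above can be sketched as independent recognizers whose outputs are merged into a world model that a rule-based engine acts on. All component names and thresholds here are illustrative assumptions, not any real system's API.

```python
# Hypothetical mediated-perception pipeline: parse the scene with
# separate recognizers, combine the results, then decide.
def detect_lanes(frame):  return {"lane_offset_m": 0.2}
def detect_cars(frame):   return {"lead_car_dist_m": 25.0}
def detect_signs(frame):  return {"speed_limit_kmh": 50}

def decide(world):
    """Toy rule-based decision engine acting on the combined scene parse."""
    if world["lead_car_dist_m"] < 10.0:
        return "brake"
    return "keep_lane"

frame = object()                        # stand-in for a camera image
world = {}
for module in (detect_lanes, detect_cars, detect_signs):
    world.update(module(frame))         # combine recognizer outputs
print(decide(world))                    # keep_lane
```

The cost of this design, which direct perception aims to avoid, is that the full scene must be parsed even when only a few quantities matter for the driving decision.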
10. Direct Perception
Falls in between mediated perception and behavior reflex
Based on a deep ConvNet
Estimates a few key affordance indicators and computes driving commands from them
More accurate results and a more capable autonomous experience
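To make "affordance indicators" concrete: a sketch of how a few estimated quantities (angle to the lane tangent, distance from the lane center, road width) could be turned into a steering command. The formula is loosely modeled on the paper's lane-centering controller, but the gain and names here are illustrative assumptions.

```python
# Hypothetical direct-perception controller: a ConvNet would estimate the
# affordances; a simple rule converts them into a steering command.
def steering_command(angle, dist_center, road_width, gain=2.0):
    """Steer to align with the lane tangent and recenter the car.

    angle:       heading error w.r.t. the lane tangent (radians)
    dist_center: signed lateral offset from the lane center (m)
    road_width:  lane/road width used to normalize the offset (m)
    """
    return gain * (angle - dist_center / road_width)

# Perfectly centered and aligned -> no steering correction needed.
print(steering_command(angle=0.0, dist_center=0.0, road_width=4.0))  # 0.0
```

The point of the paradigm is that these few numbers suffice for driving, so the network never needs to parse the full scene as mediated perception does.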
18. Testing the TORCS-trained direct perception ConvNet on real driving videos taken by a smartphone camera
Trained and tested in two different domains
Testing on real-world data: smartphone camera video
19. The lane perception module works well
It determines the correct lane configuration
The car perception module is slightly noisier
The computer-graphics models of cars in TORCS look quite different from the real ones
Results
20. The KITTI dataset contains over 40,000 stereo images
For each image, we define a 2D coordinate system on the zero-height (ground) plane
The "Y" axis is along the host car's heading, while the "X" axis points to the right of the host car
We estimate the coordinates (x, y) of the cars ahead of the host car in this system
Car Distance Estimation on KITTI dataset
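The host-centric ground-plane frame described above can be sketched as a rotation plus translation: "Y" points along the host car's heading and "X" to its right. The function and argument names below are assumptions for illustration, not part of the KITTI toolkit.

```python
import math

def to_host_frame(car_xy, host_xy, host_heading):
    """Return (x_right, y_ahead) of a car relative to the host.

    host_heading is the host's heading angle in the world frame,
    measured from the world x-axis (radians).
    """
    dx = car_xy[0] - host_xy[0]
    dy = car_xy[1] - host_xy[1]
    c, s = math.cos(host_heading), math.sin(host_heading)
    y_ahead = dx * c + dy * s   # projection onto the heading direction
    x_right = dx * s - dy * c   # projection onto the host's right
    return x_right, y_ahead

# Host at the origin facing world +y; a car 10 m directly ahead
# maps to approximately (x_right, y_ahead) = (0, 10).
print(to_host_frame((0.0, 10.0), (0.0, 0.0), math.pi / 2))
```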
21. Due to the low resolution of the input images, cars far away can hardly be detected by the ConvNet
The final distance estimate is a combination of the two ConvNets' outputs
Results
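The slide does not specify how the two ConvNet outputs are combined. A plausible sketch, stated as an assumption rather than the paper's actual scheme: trust the close-range network nearby, the far-range network beyond a switch-over distance, and cross-fade in between.

```python
# Hypothetical fusion of two distance estimates: d_close from a ConvNet on
# the full frame, d_far from one on a zoomed central crop for distant cars.
def fuse_distance(d_close, d_far, near=30.0, far=50.0):
    """Blend two distance estimates based on the close-range reading (m)."""
    if d_close <= near:
        return d_close                   # close-range net is reliable here
    if d_close >= far:
        return d_far                     # only the far-range net sees this
    w = (d_close - near) / (far - near)  # linear cross-fade in between
    return (1 - w) * d_close + w * d_far

print(fuse_distance(10.0, 12.0))  # 10.0 (close-range net trusted)
print(fuse_distance(60.0, 58.0))  # 58.0 (far-range net trusted)
```

The `near`/`far` thresholds are illustrative; in practice they would be tuned to where each network's accuracy degrades.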
22. Comparing the performance of our KITTI-based ConvNet with the state-of-the-art DPM car detector
Comparison with the DPM-based baseline
23. On an image dataset of 21,100 samples
4,096 neurons in the first fully connected layer
The top 100 images from the dataset that most strongly activate a given neuron
What this neuron learned during training: a visual representation
Visualization
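The visualization procedure above amounts to ranking all images by one neuron's activation and keeping the top 100. The sketch below uses random activations as a stand-in for real ConvNet outputs; only the dataset and layer sizes come from the slide.

```python
import numpy as np

# Random stand-in for real activations: 21,100 images x 4,096 neurons
# in the first fully connected layer (sizes from the slide).
rng = np.random.default_rng(42)
activations = rng.normal(size=(21100, 4096))

def top_activating_images(acts, neuron, k=100):
    """Indices of the k images with the largest activation for `neuron`."""
    order = np.argsort(acts[:, neuron])[::-1]   # descending by activation
    return order[:k]

top100 = top_activating_images(activations, neuron=0)
print(len(top100))  # 100
```

Displaying those 100 images side by side is what reveals the visual pattern a neuron has learned to respond to.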
24. We propose an autonomous driving paradigm based on "Direct Perception"
Experiments show that our approach performs well in both virtual and real environments.
Conclusions
25. By:
Chenyi Chen
Ari Seff
Alain Kornhauser
Jianxiong Xiao
Princeton University
Website: http://deepdriving.cs.princeton.edu
Original Research Work Titled:
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving
Funded By:
Google
Intel
Nvidia Corp.