RAMSUNDAR KALPAGAM GANESAN
+1 480-374-9262
ramsundar@asu.edu
linkedin.com/in/kgram007
My Portfolio
I am a graduate student in Computer Engineering, with
research interests focused on the principles and applications
of Computer Vision. I am currently working as a graduate
researcher at the Interactive Robotics Lab at ASU, advised by
Prof. Heni Ben Amor. I am passionate about acquiring
knowledge, learning new skills, and applying them to build
new things.
Object Tracking and Augmented Projection C++ | OpenCV | OpenGL | Linux | Augmented Reality | Computer Vision
Currently working on my Master’s thesis on an Augmented Projection system
that tracks objects and projects augmented information onto them using
Computer Vision techniques. My focus is on developing and optimizing a
model-based (edge-based) object tracker for Human-Robot collaborative
environments, using the image feed from a monocular camera.
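The thesis tracker itself is not described in detail here, but the core idea behind model-based edge tracking can be sketched: score a candidate object pose by how close the projected model edges land to the edges detected in the camera image (a chamfer-style distance). The following is a minimal illustrative sketch in Python with made-up point sets, not the actual C++/OpenCV implementation:

```python
# Illustrative chamfer-style edge matching (not the thesis implementation).
# A candidate pose is scored by the mean distance from each projected
# model edge point to its nearest detected image edge point.

import math

def chamfer_score(model_edges, image_edges):
    """Mean nearest-neighbour distance from projected model edge points
    to detected image edge points. Lower means a better pose fit."""
    total = 0.0
    for mx, my in model_edges:
        total += min(math.hypot(mx - ix, my - iy) for ix, iy in image_edges)
    return total / len(model_edges)

# Hypothetical data: a vertical image edge, and two candidate poses —
# one aligned with it, one offset by 4 pixels.
image_edges = [(10, 10), (10, 11), (10, 12), (10, 13)]
aligned = [(10, 10), (10, 12)]
offset = [(14, 10), (14, 12)]
```

A real tracker would evaluate this score over many candidate poses per frame and keep the minimizer; the aligned pose above scores 0 while the offset one scores 4.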
Eye Controlled Wheelchair LabVIEW | Computer Vision | Embedded | Arduino | C
Designed and implemented a system that tracks a paralyzed patient’s eye
gaze and drives the wheelchair in the intended direction. I developed
the image-processing algorithm in LabVIEW to perform eye tracking on the
images from a head-mounted webcam. I was also involved in developing the
embedded hardware that interfaced the computer running the eye tracker
with the low-level motor-driver circuit.
Winner of NIYANTRA, a National Embedded System
Design Contest conducted by National Instruments
https://goo.gl/4o3XGC
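The original system was built in LabVIEW, but the final step — turning a tracked pupil position into a drive command — can be sketched in Python. The neutral gaze position and dead-zone size below are assumed values for illustration, not from the project:

```python
# Illustrative sketch (not the original LabVIEW code): map a detected pupil
# position in the camera frame to a wheelchair drive command by comparing it
# against a calibrated neutral gaze position, with a dead zone for safety.

def gaze_to_command(pupil_x, pupil_y, center=(320, 240), dead_zone=60):
    """Return 'LEFT', 'RIGHT', 'FORWARD', or 'STOP' from pupil coordinates."""
    dx = pupil_x - center[0]
    dy = pupil_y - center[1]
    if abs(dx) <= dead_zone and abs(dy) <= dead_zone:
        return "STOP"                       # gaze near centre: hold position
    if abs(dx) > abs(dy):                   # dominant horizontal gaze
        return "RIGHT" if dx > 0 else "LEFT"
    return "FORWARD" if dy < 0 else "STOP"  # looking up drives forward

print(gaze_to_command(100, 240))  # LEFT
```

The dead zone keeps the chair stationary for small involuntary eye movements, which matters more here than responsiveness.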
Facial Expression Recognition using Local Features MATLAB | Computer Vision | Machine Learning | FER
Developed and tested machine learning models using Neural
Networks and SVMs to recognize expressions from images of
human faces. Computer Vision techniques were used to pre-process
the raw images and to extract and localize features. The features
used for training were Local Binary Patterns and Gabor wavelet
coefficients.
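The Local Binary Pattern feature mentioned above has a simple core: each pixel is encoded by thresholding its 8 neighbours against it. A minimal sketch of the basic operator (the project's exact LBP variant and MATLAB implementation are not specified here):

```python
# Basic 8-neighbour Local Binary Pattern operator (illustrative sketch;
# the project's exact LBP variant is not specified in the text).

def lbp_code(patch):
    """LBP code for the centre pixel of a 3x3 grayscale patch.
    Neighbours are thresholded against the centre and packed clockwise
    from the top-left corner into an 8-bit code."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:          # neighbour at least as bright as centre -> 1
            code |= 1 << bit
    return code
```

Histograms of these codes over image sub-regions then form the feature vector fed to the classifier.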
An Image Processing Approach to Detect Obstacles on Road C++ | OpenCV | Visual Studio | Computer Vision
Developed a vision-based system to detect obstacles on the road by processing
video frames from a vehicle’s front-mounted camera. The system was designed
and optimized for detecting static obstacles such as potholes and speed
bumps, which are prevalent on Indian roads. The system could also provide a
rough estimate of the distance from the vehicle to the obstacle; this
information is broadcast as a CAN message on the vehicle’s CAN network.
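One common way to get a rough distance estimate from a single front-mounted camera is flat-ground pinhole geometry: a road point imaged v pixels below the horizon row lies at roughly d = f·H / (v − v_horizon), where H is the camera height. The project's exact method is not stated; this is an illustrative sketch with assumed calibration values:

```python
# Illustrative flat-ground monocular distance estimate (the project's exact
# method is not stated). Assumes a calibrated pinhole camera at a known
# height above a flat road: d = focal_px * cam_height_m / (v - v_horizon).

def ground_distance(v, v_horizon=240, focal_px=700, cam_height_m=1.2):
    """Approximate distance (metres) to a road point at image row v
    (rows increase downwards; v_horizon is the horizon row)."""
    if v <= v_horizon:
        raise ValueError("row is at or above the horizon; not on the ground")
    return focal_px * cam_height_m / (v - v_horizon)
```

The estimate degrades for obstacles with height (their base row must be used) and on non-flat roads, which is why it is only a rough figure.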
Automatic Spike Sorting using Wavelets MATLAB | Machine Learning
Developed an unsupervised learning algorithm that automatically clusters
and classifies neural spike data recorded from a rat’s brain. After
experimenting with different features, the wavelet coefficients obtained
from the 1D wavelet transform of the input spikes proved most successful
for classification. PCA was used to reduce the feature dimensionality, and
hierarchical clustering was used for classification.
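The 1D wavelet transform at the heart of the feature extraction can be illustrated with its simplest member, the Haar wavelet (the wavelet family actually used in the MATLAB project is not specified):

```python
# One level of the 1D Haar wavelet transform (illustrative; the project's
# wavelet family and toolbox are not specified). The approximation
# coefficients capture the spike's coarse shape, the details its transients.

import math

def haar_step(signal):
    """Single-level Haar DWT of an even-length signal.
    Returns (approximation, detail) coefficient lists, each half as long."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail
```

Applying this recursively to the approximation coefficients gives the multi-level decomposition whose coefficients, after PCA, serve as clustering features.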
Interactive touch board using IR camera LabVIEW | Computer Vision
Implemented a novel, cost-effective system that turns a display projected
onto a flat surface into an interactive touch board with the aid of computer
vision techniques. The camera image is processed by the developed algorithm,
which determines the coordinates of the IR light emitted by the stylus. The
camera is calibrated to obtain a perspective transformation matrix, which
maps the coordinates of the IR point from the image plane to the projector
plane.
https://goo.gl/iHYGkR
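The final mapping step is just applying a 3x3 perspective (homography) matrix to the detected IR point in homogeneous coordinates. A minimal sketch — the matrix here is a made-up stand-in; in the project it comes from calibration:

```python
# Illustrative sketch: mapping the detected IR point from the camera image
# plane to projector coordinates through a 3x3 perspective (homography)
# matrix. The matrix values below are made up; in the project the matrix
# is obtained from camera calibration.

def apply_homography(H, x, y):
    """Map image-plane point (x, y) through homography H (3x3 nested lists),
    dividing by the homogeneous coordinate w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# A pure 2x scale as a stand-in for a calibrated matrix:
H = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
```

The division by w is what distinguishes a perspective transform from a plain affine one and lets it correct for the camera viewing the surface at an angle.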
Automated Remote Telescope Observatory LabVIEW | Embedded | IoT
This project aimed to build an automated optical telescope system that
performs automated observations of the sky and can also be controlled
remotely over the Internet. The system uses a cRIO as the embedded unit,
bridging the Internet connection and the motor-control drivers. High-torque
stepper motors align the telescope to point in the intended direction. The
project prototype was presented at a National Embedded System design
contest conducted by National Instruments, India.
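A basic piece of such a pointing system is converting a commanded axis angle into stepper motor steps. The steps-per-revolution, microstepping, and gear-ratio figures below are assumed values for illustration, not from the project:

```python
# Illustrative: converting a commanded telescope axis angle into stepper
# motor steps. Steps-per-revolution, microstepping, and gear ratio are
# assumed values, not taken from the project hardware.

def degrees_to_steps(angle_deg, steps_per_rev=200, microstep=16, gear_ratio=100):
    """Whole microsteps needed to rotate a geared axis by angle_deg."""
    steps_per_degree = steps_per_rev * microstep * gear_ratio / 360.0
    return round(angle_deg * steps_per_degree)
```

The large gear ratio trades speed for the fine angular resolution and holding torque a telescope mount needs.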
Line Following Robot Embedded C | 8051 (AT89c51) | Robotics
This is a three-wheeled line-following robot that detects and tracks
black lines on a white surface. At the heart of the robot is an AT89c51
microcontroller, which acquires data from the front-mounted line sensor,
processes it, makes decisions, and finally drives the motors. The line
sensor consists of a series of IR transceivers whose signals are amplified
and digitized before being fed to the controller. The tracking algorithm
uses a PID controller as part of the feedback loop.
https://goo.gl/1gKpbA
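The PID feedback loop in the tracking algorithm can be sketched as follows. The original runs in Embedded C on the 8051; this Python version is illustrative, with made-up gains, and uses the line position reported by the IR sensor array relative to centre as the error:

```python
# Illustrative discrete PID controller of the kind used in the feedback
# loop (the original is Embedded C on the AT89c51; gains and sample time
# here are made-up values). The error is the sensed line position relative
# to the robot's centre; the output is a steering correction.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the steering correction for the current error sample."""
        self.integral += error * self.dt                     # accumulate I term
        derivative = (error - self.prev_error) / self.dt     # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

On the robot, the correction would be applied differentially to the two wheel motors: a positive correction speeds up one wheel and slows the other, steering the robot back over the line.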