Doctoral research thesis:

Computational imaging for robotic vision in visually challenging conditions

Current Research:

Burst Feature Finder

Developing a burst feature finder that jointly searches for keypoints in scale-slope space to improve robotic vision, as sketched below.
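
A minimal sketch of the idea, assuming grayscale frames normalised to [0, 1] and interpreting "slope" as the apparent per-frame motion of a feature within the burst; the response function, threshold, and parameterisation are illustrative assumptions, not the thesis implementation.

import cv2
import numpy as np

def burst_response(burst, sigma, slope):
    # Shift each frame along the candidate apparent-motion slope so that a
    # feature moving at that slope stays put, average to suppress noise,
    # then apply a scale-normalised Laplacian-of-Gaussian response.
    h, w = burst[0].shape  # grayscale frames assumed
    acc = np.zeros((h, w), np.float32)
    for t, frame in enumerate(burst):
        dx, dy = slope[0] * t, slope[1] * t
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(frame.astype(np.float32), M, (w, h))
    acc /= len(burst)
    blurred = cv2.GaussianBlur(acc, (0, 0), sigma)
    return sigma ** 2 * cv2.Laplacian(blurred, cv2.CV_32F)

def find_keypoints(burst, sigmas, slopes, thresh=0.05):
    # Joint search: keep (x, y, sigma, slope) tuples whose response is a
    # spatial maximum above threshold. A full implementation would also
    # compare against neighbouring scales and slopes.
    keypoints = []
    for sigma in sigmas:
        for slope in slopes:
            r = np.abs(burst_response(burst, sigma, slope))
            peak = (r == cv2.dilate(r, np.ones((3, 3), np.uint8))) & (r > thresh)
            for y, x in zip(*np.nonzero(peak)):
                keypoints.append((x, y, sigma, slope))
    return keypoints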

Burst imaging for light-constrained structure from motion

Images captured under extremely low light are noise-limited, which can cause existing robotic vision algorithms to fail. We develop an image processing technique that aids 3D reconstruction from images acquired in low-light conditions. Our technique, based on burst photography, uses direct methods to register the short-exposure images within each burst, improving the robustness and accuracy of feature-based structure-from-motion (SfM).
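
The general align-and-merge recipe can be sketched with OpenCV's ECC direct registration (assuming OpenCV 4.1+ and grayscale frames); this mirrors the approach described above rather than reproducing our exact pipeline.

import cv2
import numpy as np

def merge_burst(frames):
    # Register every short-exposure frame to the first via direct (ECC)
    # alignment, then average the aligned frames to suppress noise.
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    h, w = ref.shape
    for frame in frames[1:]:
        img = frame.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)  # identity affine initialisation
        cv2.findTransformECC(ref, img, warp, cv2.MOTION_AFFINE, criteria, None, 5)
        acc += cv2.warpAffine(img, warp, (w, h),
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return acc / len(frames)

# The merged, low-noise frame then feeds a standard feature-based SfM front
# end, e.g. cv2.SIFT_create().detectAndCompute(merged.astype(np.uint8), None).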

Research Collaboration:

Unsupervised learning of depth estimation and visual odometry for sparse light field cameras

While an exciting diversity of new imaging devices is emerging that could dramatically improve robotic perception, the challenges of calibrating and interpreting these cameras have limited their uptake in the robotics community. In this work we generalise techniques from unsupervised learning to allow a robot to autonomously interpret new kinds of cameras. We consider emerging sparse light field (LF) cameras, which capture a subset of the 4D LF function describing the set of light rays passing through a plane. We introduce a generalised encoding of sparse LFs that allows unsupervised learning of odometry and depth. 
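
As a toy illustration of what a generalised sparse-LF encoding can look like, the sketch below stacks each available sub-aperture view with constant channels carrying its (u, v) position on the camera plane; the layout is an assumption for illustration, not the paper's exact encoding.

import numpy as np

def encode_sparse_lf(views, uv_positions):
    # views: list of HxW grayscale sub-aperture images.
    # uv_positions: (u, v) coordinate of each view on the 2D camera plane.
    # Returns an HxWx(3N) tensor a depth/odometry network can consume
    # regardless of which subset of the 4D LF the camera samples.
    channels = []
    for img, (u, v) in zip(views, uv_positions):
        h, w = img.shape
        channels.append(img.astype(np.float32))
        channels.append(np.full((h, w), u, np.float32))  # ray-origin u channel
        channels.append(np.full((h, w), v, np.float32))  # ray-origin v channel
    return np.stack(channels, axis=-1)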

Graduate Research:

Thailand 4.0, an adapted economic model of Industry 4.0, aims to unlock the country from several economic challenges through knowledge, technology, and innovation. In this research, we focus on designing and developing an industrial-standard, omnidirectional, obstacle-avoiding warehouse mobile robot. We enable autonomous navigation using simultaneous localization and mapping (SLAM), together with a real-time obstacle recognition system built on machine learning techniques. We analyse the reliability, complexity, and sensor-fusion technologies of low-cost sensors, and replicate all feasible functionality of the industrial robot on a small-scale robot for educational studies. AutoCAD, SolidWorks, Arduino, Raspberry Pi, ROS, HTML, Python, MATLAB, TensorFlow and OpenCV were used during different stages of research and development.
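
As a hedged sketch of the real-time recognition step, the snippet below runs an off-the-shelf TensorFlow classifier on a camera feed; the model choice and camera index are assumptions, standing in for the task-specific recognition system described above.

import cv2
import numpy as np
import tensorflow as tf

# Off-the-shelf ImageNet classifier stands in for the trained model;
# weights download on first use.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
decode = tf.keras.applications.mobilenet_v2.decode_predictions

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.resize(frame, (224, 224))[..., ::-1]  # BGR -> RGB for the model
    preds = model.predict(preprocess(rgb[np.newaxis].astype(np.float32)), verbose=0)
    label = decode(preds, top=1)[0][0][1]
    print(label)  # a ROS node would publish this for the navigation stack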

Undergraduate Research:

Localization of a robot is a fundamental challenge in autonomous navigation, obstacle avoidance and motion tracking, especially on sloped terrain. Recent advances in visual odometry allow robots to use sequential camera images to estimate their position relative to a known starting position. In this research, we develop a statistical model to calculate the velocity of a robot on unknown terrain using visual odometry. We design and build a mobile robot as our controlled target and use a GoPro camera to obtain visual cues from it. We operate the robot outdoors and, with the aid of multiple computer vision techniques, measure its position. We compute velocity estimates from both encoders and visual sensors, and perform experimental analysis to derive a statistical relationship between slip, friction and the velocity of the robot. Our navigation simulator, based on this statistical model, exhibits performance similar to the physical robot.
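
The slip quantity at the heart of such a statistical model can be illustrated as follows; variable names and the fitting step are assumptions for the sketch, not the exact model used.

import numpy as np

def slip_ratio(v_encoder, v_visual):
    # Longitudinal slip: wheel speed from the encoders versus the speed the
    # camera actually observes. Positive slip means the wheels spin faster
    # than the robot moves, e.g. on a loose or steep surface.
    v_encoder = np.asarray(v_encoder, dtype=float)
    v_visual = np.asarray(v_visual, dtype=float)
    return (v_encoder - v_visual) / np.maximum(np.abs(v_encoder), 1e-6)

# Fitting slip against observed velocity (e.g. with np.polyfit) gives a
# statistical model a navigation simulator can sample from:
# coeffs = np.polyfit(v_visual, slip_ratio(v_encoder, v_visual), deg=2)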

Additional Research:

Malaria is a globally widespread mosquito-borne disease caused by the Plasmodium parasite. Although blood films are stained for better visualisation under the microscope, automatic classification is challenging due to the presence of many kinds of parasites in various orientations, along with other artefacts in the blood film. In this research, we detect the presence of the most critical parasite stage, the gametocyte stage of Plasmodium falciparum, in Giemsa-stained blood films using photomicrograph analysis. Having extracted the parasite from the image background through a series of pre-processing operations, we apply both K-nearest neighbors (K-NN) and Gaussian naïve Bayes classifiers. As the key element of the research, we use moment-invariant features to make the input features invariant to translation, rotation and scale (TRS). Because this application demands a high true positive rate, we select Gaussian naïve Bayes as the preferred classifier based on leave-one-out cross-validation.
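
A minimal sketch of the feature and classification stage, assuming pre-segmented binary parasite masks; scikit-learn stands in here for whatever tooling was actually used, and the neighbour count is an illustrative assumption.

import cv2
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def hu_features(mask):
    # Seven Hu moment invariants (log-scaled), invariant to translation,
    # rotation and scale (TRS) of the segmented parasite blob.
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def compare_classifiers(masks, labels):
    # Leave-one-out comparison of K-NN and Gaussian naive Bayes, reporting
    # the true positive rate (recall), the criterion used to pick a model.
    X = np.array([hu_features(m) for m in masks])
    y = np.array(labels)  # 1 = gametocyte present, 0 = absent
    for clf in (KNeighborsClassifier(n_neighbors=3), GaussianNB()):
        pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
        print(type(clf).__name__, "TPR:", recall_score(y, pred))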