Computer Vision vs Robotic Imaging

Computer vision was introduced by researchers in the 1960s to gain high-level understanding of digital images and videos, automating tasks that the human visual system performs. While we have no trouble interpreting subtle variations in environmental lighting, computers find tasks like segmenting an object from its background difficult [1]. Computer vision has covered a wide range of interesting vision problems, drawing interdisciplinary researchers from diverse fields including medicine, art, engineering, photography, film making and astrophysics. Since most of these fields are closely related, and there are experts in each domain with good knowledge of diverse topics, computer vision solutions have been used as plug-in solutions, especially in robotic environments since vision systems were first introduced to robotics.

Computer vision techniques are generally off-line: we already have an image dataset, and we use a computer or a mobile phone (an embedded system) to understand and manipulate the data to get the required output, such as segmentation, feature detection, image processing or AI-based image correction. Computer vision applications focus on producing results similar to the human visual system. Robotic vision, on the other hand, applies computer vision techniques on robots to make them see better [2]. While these are also embedded systems when it comes to programming, it is important to modify computer vision techniques so that robotic applications get information useful for localization and mapping. The two fields are closely related, and the terms are often used interchangeably.
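To make the off-line setting concrete, here is a minimal sketch in Python: the dataset already exists, and processing only transforms it into an output such as a segmentation mask. The synthetic image, function name and mean-threshold rule are invented for illustration, not taken from any particular library or pipeline.

```python
import numpy as np

def segment_foreground(image, threshold=None):
    """Segment bright foreground pixels from a dark background.

    A classic off-line step: the image was captured earlier, and we
    only transform it. If no threshold is given, use the global mean
    intensity as a crude separating value (illustrative choice).
    """
    if threshold is None:
        threshold = image.mean()
    return image > threshold

# Synthetic "dataset": dark background with one bright square object.
img = np.zeros((64, 64), dtype=np.float32)
img[20:40, 20:40] = 1.0

mask = segment_foreground(img)  # boolean mask of the bright square
```

The key point is that nothing in this loop can influence how `img` was captured; the camera, its settings and the scene are fixed before processing begins, which is exactly the assumption robotic imaging relaxes.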

A robot doesn’t necessarily have to see the environment the way we do, because it can use a variety of sensors to process information far beyond the visual spectrum of the human eye. While robotic vision adapts techniques from computer vision, pattern recognition and image processing to improve captured imagery for applications, robotic imaging is inclined towards adapting computational imaging techniques for robots. So instead of only focusing on generating better outputs from given inputs, robotic imaging explores opportunities for generating better images to achieve better results.

In computer vision, we capture datasets of similar nature to train AI models while avoiding anomalies, or use the same camera with similar settings to capture diverse scenes so that the model generalises. However, if we can actively take advantage of control systems, complementary sensors and the environment while capturing images, the chances of accurate reconstruction improve, leading to a customised solution for a specific robot in a specific environment with specific settings.

Mars rovers are programmed to capture images while roaming the Martian surface. In situations like that, it is important to save energy and avoid complex computation while still capturing enough data to understand the unknowns. In a robot with multiple sensors and a control system, it is possible to issue commands such as: capture images only if the temperature is above a specific threshold, and while capturing, move in a specific direction at a desired speed. These robots also carry state-of-the-art cameras. The Opportunity rover (2003-2019) used panoramic cameras to capture the Martian surface, the sun and the sky: a pair of cameras working together to produce multi-wavelength, 3-D panoramic pictures [3]. The recent Mars rover, Perseverance, has a subsurface radar, an X-ray spectrometer, an ultraviolet spectrometer, a laser micro-imager and zoomable panoramic cameras, which provide additional complementary information to understand the terrain better.
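The kind of sensor-conditioned capture command described above can be sketched as a small decision step. This is a hypothetical illustration only: the class, function and parameter names are invented here, and real rover flight software is far more involved.

```python
from dataclasses import dataclass

@dataclass
class SensorState:
    """Hypothetical snapshot of complementary sensor readings."""
    temperature_c: float
    heading_deg: float
    speed_mps: float

def plan_step(state, temp_threshold_c=0.0,
              target_heading_deg=90.0, target_speed_mps=0.1):
    """Decide whether to capture, and what motion command to issue.

    Mirrors the rule in the text: capture only when the temperature
    exceeds a threshold, and meanwhile move in a chosen direction at
    a desired speed. All thresholds/targets are illustrative values.
    """
    should_capture = state.temperature_c > temp_threshold_c
    command = {"heading_deg": target_heading_deg,
               "speed_mps": target_speed_mps}
    return should_capture, command

# Example: warm enough to capture, keep driving east slowly.
ok, cmd = plan_step(SensorState(temperature_c=5.0,
                                heading_deg=0.0, speed_mps=0.0))
```

The point of the sketch is that capture is a decision made jointly with motion and sensing, rather than a passive recording step bolted on afterwards.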

These are exciting times to be alive, with humanity taking responsibility for the world we live on to avert the climate crisis, and with space travel becoming a reality; we have to divide and conquer the research opportunities to improve the life ahead of us. In robotics, we are no longer bound to a given dataset: we have the freedom to capture images as we want, including all the information we desire in a frame, while saving energy and making reconstruction more precise.

References

  1. Szeliski, R., 2010. Computer vision: algorithms and applications. Springer Science & Business Media.
  2. Corke, P.I. and Khatib, O., 2011. Robotics, vision and control: fundamental algorithms in MATLAB, 73(2). Berlin: Springer.
  3. mars.nasa.gov, 2022. Cameras | Rover – NASA’s Mars Exploration Program.