Explainable Artificial Intelligence (XAI)

Artificial intelligence has long been considered a black box: machine learning is used in many innovative applications today, but why a model does what it does is usually explained by intuition or expectation rather than by a clear mathematical account. This has been one of the drawbacks of AI in robotic applications that demand high-precision data with minimal false detections.

Applications such as autonomous cars, space robots and surgical robots cannot afford mistakes, because an error can cost valuable resources and lives. Explainable AI (XAI) provides an explanation of why a learning network generates its results in a particular manner, and such explanations can carry across domains. XAI can not only improve the user experience but also support better decisions once we understand what the model is doing. The objectives of XAI are to confirm existing knowledge, challenge existing knowledge and generate new hypotheses based on existing knowledge.

Machine learning algorithms are classed as black-box or white-box models according to how well domain experts can understand them, and XAI algorithms aim to ensure transparency, interpretability and explainability. They describe the relationship between the intuition behind learning and decision-making, while justifying how the inputs and the learning approach contribute to the outputs. If explainability is the collection of features of the interpretable domain of a model, then in robotics this space is especially important to explore: a robot moving through a scene encounters obstacles, visual challenges and scene variations, all of which can be leveraged for efficient reconstruction.

Domains where robots are deployed on the basis of trust in their algorithms, such as space, underwater, medicine and defence, currently lean towards classical approaches; with enough effort in XAI, artificial intelligence could also be adopted in these safety-critical domains, opening future research avenues in robotics and AI. A robot with AI technology optimises its behaviour to satisfy a mathematical objective.

For example, if we ask a robot to learn from images captured underwater so it can detect and count fish, the AI might learn useful cues such as the motion of fish (similarly moving pixels can correspond to a fish). It may also learn fish colour as a feature defining all kinds of fish, which is undesirable if that rule does not generalise. ML scientists are currently working on auditing such learned rules to generalise robotic vision systems and improve trust in robots and AI systems in general.
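One common way to audit which cues a model actually relies on is permutation importance: shuffle one input feature at a time and see how much performance drops. The sketch below is a hypothetical, simplified version of the fish scenario, with a synthetic "motion" feature that truly determines the label and a "colour" feature that is only weakly, spuriously correlated with it; the feature names and data are invented for illustration.

```python
# Hypothetical sketch: auditing which features a "fish detector" relies on,
# using permutation importance on synthetic motion and colour features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
motion = rng.normal(size=n)                      # genuinely informative cue
label = (motion > 0).astype(int)                 # "fish present" depends on motion
colour = label + rng.normal(scale=2.0, size=n)   # weak, spurious correlation
X = np.column_stack([motion, colour])

clf = RandomForestClassifier(random_state=0).fit(X, label)
result = permutation_importance(clf, X, label, n_repeats=10, random_state=0)
importances = dict(zip(["motion", "colour"], result.importances_mean))
for name, imp in importances.items():
    print(f"{name}: {imp:.3f}")
```

If the audit is working, shuffling "motion" hurts accuracy far more than shuffling "colour", flagging colour as a cue the system should not depend on.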

Some techniques used in robotic vision algorithms allow visualisation of the inputs to which individual neurons respond most strongly, and some researchers are aggregating these neurons into circuits that implement human-comprehensible functions. Other promising lines of XAI research include analysing feature representations with standard clustering techniques, training networks to output linguistic explanations of their behaviour, and evaluating the influence of individual training inputs on behaviour.
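The first of these techniques is often called activation maximisation: start from noise and gradient-ascend the input until a chosen neuron fires strongly; the resulting input shows the pattern that neuron prefers. The toy sketch below assumes a single hand-made "neuron" (a fixed linear filter followed by tanh) rather than a real trained network, purely to show the mechanics.

```python
# Toy activation-maximisation sketch on a hypothetical single neuron:
# gradient-ascend an 8x8 "image" until the neuron's tanh response saturates.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 8))               # weights of the neuron to visualise
x = rng.normal(scale=0.01, size=(8, 8))   # start from near-zero noise

for _ in range(100):
    activation = np.tanh(np.sum(w * x))   # neuron response to current input
    grad = (1 - activation**2) * w        # d(activation)/dx for tanh(w . x)
    x += 0.1 * grad                       # gradient ascent step
    x = np.clip(x, -1, 1)                 # keep the "image" in a valid range

final_activation = np.tanh(np.sum(w * x))
print(final_activation)                   # close to the neuron's maximum
```

In practice the same idea is applied to neurons deep inside a trained convolutional network, with regularisers added so the optimised input stays image-like.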

As AI attracts more sectors and robots become more lifelike every day, official bodies have been established to hold AI accountable for its decision making and to ensure trust. The European Union has introduced a right to explanation in the General Data Protection Regulation (GDPR) as an attempt to address the potential problems of rising algorithmic decision making. In the future we can expect more established organisations to regulate AI ethics, as AI is involved not only in the high technology around us but in our own lives.

