Explainable AI, the new technology that justifies its decisions to the user

It represents an area of growing interest in the fields of artificial intelligence and machine learning

Artificial Intelligence (AI) has long been capable of making decisions based on data, but a new system built on the principle of explainable AI goes a step further: it guides the user towards achieving a goal while justifying the decisions it makes along the way.

The new system, announced by Fujitsu Laboratories and Hokkaido University (Japan), automatically presents users with the steps needed to achieve a desired result, based on what the AI has learned from data such as medical checkups.

Explainable AI represents an area of growing interest in the fields of artificial intelligence and machine learning. While AI technologies can make decisions automatically from data, explainable AI also provides individual reasons for those decisions, helping to avoid the so-called black-box phenomenon, in which an AI reaches conclusions through unclear and potentially problematic means.
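
As a rough illustration of what "individual reasons" for a single decision can look like (this is a generic explainability sketch, not the Fujitsu/Hokkaido method; the model, data, and feature names are assumptions), a linear classifier's score can be decomposed into per-feature contributions and ranked:

```python
# Minimal sketch of a per-decision explanation (illustrative only):
# for a linear classifier, each feature's contribution to one prediction
# is coefficient * feature value, which can be ranked and shown to the user.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "cholesterol", "bmi"]  # hypothetical labels

model = LogisticRegression().fit(X, y)

x = X[0]                            # one individual's record
contributions = model.coef_[0] * x  # per-feature contribution to the decision score
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:15s} {c:+.3f}")
print("predicted class:", model.predict(x.reshape(1, -1))[0])
```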

Although certain existing techniques can suggest hypothetical changes that would lead to a better outcome when an undesirable result occurs, they do not offer concrete steps for achieving it.
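
The gap described here is roughly what the explainable-AI literature calls counterfactual explanations or actionable recourse: not just "the outcome would improve if some values were different", but an explicit sequence of changes. A minimal sketch of the idea follows; it is a naive greedy search over one trained classifier, not the announced system, and the step size and iteration limit are assumptions:

```python
# Naive counterfactual search (illustrative only, not the Fujitsu system):
# starting from an instance with an undesirable prediction, greedily nudge
# one feature at a time until the classifier's decision flips.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual_steps(x, target=1, step=0.25, max_steps=50):
    """Return a list of (feature_index, delta) edits that flip the prediction."""
    x = x.copy()
    steps = []
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == target:
            return steps
        # Try nudging each feature up or down; keep the move that most
        # increases the probability of the desired class.
        candidates = []
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                candidates.append((p, i, delta))
        _, i, delta = max(candidates)
        x[i] += delta
        steps.append((i, delta))
    return steps

x0 = X[y == 0][0]                # an instance with the undesirable outcome
print(counterfactual_steps(x0))  # concrete edits, not just a hypothetical
```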

This new technology offers the possibility of improving the transparency and reliability of decisions made by artificial intelligence, which in the future will allow more people to interact with technologies that use it with a sense of trust and peace of mind, Fujitsu said in a statement.

Source: dpa
