Author: Manu Somasagar Kamalakar
As the world moves toward digitalization, Artificial Intelligence (AI) and Machine Learning (ML) are being used in almost every field. AI is no longer a vision of the future; it is already here, and it plays a significant role in our day-to-day lives. The question, however, is to what extent we can trust these systems.
Humans Want to Understand – Including AI Mechanisms
If we are given an accurate model, why don't we just trust it and ignore why it made a certain decision? "One problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks." [1] Beyond that, the human mind is naturally curious and wants to understand the reasons behind a model's decisions.
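To make the quoted point concrete, here is a minimal sketch with made-up numbers (the example is mine, not from the cited paper): on a heavily imbalanced task, a "model" that never predicts the rare class reaches roughly 99 percent accuracy while catching none of the cases that actually matter.

```python
# Illustrative only: why accuracy alone can mislead on imbalanced data.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical screening task with ~1% positive cases.
y_true = (rng.random(10_000) < 0.01).astype(int)

# A "model" that always predicts the majority class (never flags anything).
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true == 1].mean()  # fraction of true positives it catches

print(f"accuracy: {accuracy:.3f}")  # ~0.99, looks excellent
print(f"recall:   {recall:.3f}")    # 0.00, misses every positive case
```

A user shown only the accuracy score would have no reason to distrust this model; only by asking *why* it decides as it does would the problem surface.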
Since its beginnings, the central idea of AI has been a mathematical simulation of the human learning process [2]. Yet more than 60 years after its emergence, its answers still do not fully satisfy us. Explainability does not have the same relevance for all types of problems (this will play a bigger role in future blog posts). For example, an ordinary person is fine with not knowing why Amazon suggests a specific movie to her, or why she keeps receiving advertisements on certain topics. After receiving a very unfavorable diagnosis about her health, on the other hand, she would be very interested in knowing why. Closely related to learning is the human desire to find meaning in the world.