AI explainability is like the goose that lays the golden eggs.
Although I’m frankly bad at playing the guitar, I can bang on one from time to time. Let’s say that, when I pick one up, I can interpret it to a certain extent. That is, I understand that if I press here or there, one sound or another comes out. In other words, I can use it. But I can’t explain it. That is to say, I don’t understand why one sound or another comes out. And honestly, I’m not very interested. Well, I am interested in theory. But in practice, no more than I am in Peaky Blinders. So in the end I watch Peaky Blinders instead of studying the physics of vibration and sound.

The same thing happens to me with the TV, the airplane, or my garage door remote. I can interpret them, or use them, but not explain them.

This should matter a great deal to you. And here’s why. If you go to Lancet, NEJM or JAMA right now, you will find plenty of articles talking about the need for AI to be explainable. Those articles were written by people who know a lot about medical science and little about AI. Because if you take a three-hour YouTube course on AI, you will understand why AI is, by definition, unexplainable. If you could explain why the model arrives at some variables and not others, then it would be ordinary statistics, not the kind you do with machine learning. It would be like killing the goose that lays the golden eggs. A great tale.

None of this means that what comes out of that black box should not be interpretable or usable. In other words, it should give you variables that you can manipulate, judge, and add or remove. The AI should not treat you as if you were dumb, but simply as a person who does not have the time to get to the bottom of the why, since your human existence is limited.

Well, there are also articles talking about this concept of interpretability. They are written by people who know more. And even in those same journals, they are already beginning to admit that explainability is an illusion.

So that’s it. Happy day.