Can AI learn to explain its reasoning?

Today’s most powerful AI systems are often remarkably poor at explaining how they arrive at their answers. This is largely a consequence of the neural networks on which they’re built, and the deficiency could be a major obstacle to the adoption of AI in many fields. Would you trust an AI medical system that recommends your liver be removed but can’t explain why? Should the military shoot at an AI-identified target if no explanation can be given as to why it’s a threat? For a good article on the problem, see: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/