Can AI learn to explain its reasoning?

Today’s most powerful AI systems are remarkably poor at explaining how they arrive at their answers. This is largely a consequence of the neural networks on which they’re built, and the deficiency could prove a major obstacle to AI being adopted in many fields. Would you trust an AI medical system that recommends your liver be removed but can’t explain why? Should the military fire on an AI-identified target if no explanation can be given for why it’s a threat? For a good article about the problem see:
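To make the problem concrete, here is a small sketch (my own illustration, not taken from the article) of one way researchers try to pry an explanation out of a network: a saliency score, the gradient of the output with respect to each input, which shows which features most influenced the decision. The toy one-layer "network" and its weights below are hypothetical.

```python
import numpy as np

def predict(x, w, b):
    """A minimal one-layer 'network': sigmoid(w . x + b)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def saliency(x, w, b):
    """d(output)/d(input) via the chain rule through the sigmoid.

    Larger magnitude = that input feature had more influence on the answer.
    """
    p = predict(x, w, b)
    return p * (1.0 - p) * w

# Hypothetical weights: feature 0 matters most, feature 2 not at all.
w = np.array([3.0, -1.0, 0.0])
x = np.array([1.0, 1.0, 1.0])
s = saliency(x, w, 0.0)
print(np.abs(s).argmax())  # feature 0 dominates the "explanation"
```

Even this crude trick only says *which* inputs mattered, not *why* they mattered, which is part of what makes the explainability problem so hard for large networks.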

Liberation Giveaway on Jan 26 – Feb 26!

Are you a fan? Don’t miss your chance to win a free copy of Liberation (Kindle edition) in the giveaway running from Jan 26 to Feb 26.

Goodreads Book Giveaway: Liberation by Calvin J. Brown. The giveaway ends February 26, 2018 — see the details and enter at Goodreads.