Google's research chief questions value of 'Explainable AI'
As machine learning and AI become more ubiquitous, there are growing calls for these technologies to explain their decisions in human terms. Despite being used to make life-altering decisions, from medical diagnoses to loan limits, the inner workings of machine learning models - including deep neural networks and probabilistic graphical models - are highly complex and increasingly opaque.