A Framework for Developing Trust in Artificial Intelligence
Artificial intelligence (AI) has become a critical technology and a central topic of discussion because of its well-established success in a wide array of applications. That success, however, has been tempered by the difficulty of ensuring that humans can intervene in AI algorithms and understand how they operate within complex systems. Whenever applications operate fully autonomously, in conjunction with humans, or in situations that could significantly affect an external environment, a set of best practices must be in place to minimize the chance of adverse consequences. By breaking down and highlighting the challenges of each AI development phase, this framework helps policymakers see which aspects of trusted AI relate to their domain and how to achieve their vision of an increasingly AI-enabled future.
Authors: Dr. Philip Slingerland, Lauren Perry