Formal Assurance for Cooperative Intelligent Autonomous Agents
2018, Springer International Publishing AG, part of Springer Nature. Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions in order to ensure a proper design for effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human during a takeoff operation in which an engine-out contingency scenario may arise. The formal verification and counterexample generation enable increased confidence in the designed procedures and behavior of the intelligent copilot system.
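The abstract describes translating an executable human-machine interaction model into a formal language so that automated verification can produce counterexamples. A minimal sketch of that idea, using a hypothetical toy state machine for the takeoff/engine-out scenario (the states, transitions, and safety property below are illustrative assumptions, not the paper's actual model) and a breadth-first reachability search in place of a full model checker:

```python
from collections import deque

# Hypothetical toy model: states are (phase, engine_ok, copilot_alerted).
# Transitions are illustrative only; they do not reflect the paper's model.
def successors(state):
    phase, engine_ok, alerted = state
    nxt = []
    if phase == "roll":
        nxt.append(("roll", False, alerted))      # engine may fail on the roll
        nxt.append(("rotate", engine_ok, alerted))
    if phase == "rotate":
        nxt.append(("climb", engine_ok, alerted))
    if not engine_ok and not alerted:
        # the copilot may raise an alert, but nothing forces it before climb
        nxt.append((phase, engine_ok, True))
    return nxt

def find_counterexample(init, bad):
    """BFS over reachable states; return a trace to a bad state, or None."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if bad(s):
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Safety property: never reach climb with a failed engine and no alert issued.
bad = lambda s: s == ("climb", False, False)
trace = find_counterexample(("roll", True, False), bad)
# A non-None trace is a counterexample showing the property can be violated,
# pointing the designer to a missing constraint in the interaction procedure.
```

In practice the paper's pipeline targets an automated formal-verification back end rather than a hand-rolled search; the point here is only that a counterexample is a concrete execution trace that exposes a boundary condition in the designed procedure.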
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
author list (cited authors)
Bhattacharyya, S., Eskridge, T. C., Neogi, N. A., Carvalho, M., & Stafford, M.
complete list of authors
Bhattacharyya, Siddhartha; Eskridge, Thomas C.; Neogi, Natasha A.; Carvalho, Marco; Stafford, Milton