This proposal is to develop machine learning techniques to detect malicious intent in autonomous vehicles (such as aerial and ground mobile systems) that are compromised or under the influence of cyber attacks.
Recent events have demonstrated that attackers are interested in coordinated attacks that take over and hijack UAVs, automobiles, and other autonomous vehicles, steering them toward undesired states.
Attacks whose sole intent is to interdict and stop a system are usually easier to detect, because they manifest as system faults.
Coordinated attacks are usually stealthy, making the system appear as if it is operating normally.
We propose to develop new techniques to identify malicious intent in cyber-physical systems (CPS), such as drones or autonomous vehicles, under varying operating assumptions and resource constraints.
Our approach is to detect anomalies and estimate the state of the system, using controlled perturbations to gather additional information.
We call this approach inverse planning: using observations of actions and responses to new, controlled stimuli to infer the objective of an autonomous system that is compromised or under partial adversarial control.
We propose to use an inverse reinforcement learning (IRL) and sequential active learning framework in which the intent of an agent is inferred from the actions it chooses in various states.
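The idea of inferring intent from observed actions can be illustrated with a minimal sketch. The toy domain below (a hypothetical 1-D world with two candidate goal states, a Boltzmann-rational action model, and a Bayesian update over goal hypotheses) is our own illustrative assumption, not the proposal's actual framework; it simply shows how observed actions concentrate a posterior over possible intents.

```python
import math

# Hypothetical toy domain: states 0..4 on a line; actions -1 (left), +1 (right).
STATES = range(5)
ACTIONS = (-1, +1)
GAMMA = 0.9  # discount factor (assumed)

def step(s, a):
    """Deterministic transition, clipped to the state space."""
    return min(4, max(0, s + a))

def q_value(s, a, goal, V):
    """One-step lookahead: reward 1.0 for reaching the goal, else 0."""
    s2 = step(s, a)
    r = 1.0 if s2 == goal else 0.0
    return r + GAMMA * V[s2]

def value_iteration(goal, iters=100):
    """Optimal values under the hypothesis that `goal` is the intended target."""
    V = [0.0] * 5
    for _ in range(iters):
        V = [max(q_value(s, a, goal, V) for a in ACTIONS) for s in STATES]
    return V

def action_likelihood(s, a, goal, beta=5.0):
    """Boltzmann-rational agent model: P(a | s, goal) ∝ exp(beta * Q(s, a))."""
    V = value_iteration(goal)
    qs = {act: q_value(s, act, goal, V) for act in ACTIONS}
    z = sum(math.exp(beta * q) for q in qs.values())
    return math.exp(beta * qs[a]) / z

def intent_posterior(trajectory, goals=(0, 4)):
    """Bayesian update over candidate goals from observed (state, action) pairs."""
    post = {g: 1.0 / len(goals) for g in goals}
    for s, a in trajectory:
        for g in goals:
            post[g] *= action_likelihood(s, a, g)
        z = sum(post.values())
        post = {g: post[g] / z for g in goals}
    return post

# An agent observed moving right from states 1 and 2: the posterior
# shifts sharply toward the hypothesis that its goal is state 4.
posterior = intent_posterior([(1, +1), (2, +1)])
```

A full IRL treatment would learn a reward function over features rather than compare a fixed goal set, and the proposed sequential active learning component would additionally choose perturbations (stimuli) that maximally discriminate between competing intent hypotheses.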