Robust and Resilient Artificial Intelligence

Developing intelligent systems for missions characterized by uncertain and adversarial environments

Our Contribution

Scientists and engineers in APL’s Intelligent Systems Center (ISC) work to enable confidence in intelligent systems for critical national security applications through research in uncertainty-aware risk sensitivity, adversarial vulnerabilities and defenses, fairness and privacy, and testing and evaluation.

Research

Uncertainty-Aware Risk-Sensitive AI

ISC researchers are developing fundamentally new techniques to enable AI to operate in a dynamic and unpredictable world. These include uncertainty-aware control policies that adapt to stochastic changes in operating conditions and out-of-distribution settings, as well as risk-sensitive deep reinforcement learning techniques that allow agents to prioritize competing mission objectives.
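
As a concrete, deliberately simplified illustration of what risk sensitivity means (not a description of any specific ISC technique), the sketch below scores two hypothetical policies by conditional value at risk (CVaR), the mean of the worst outcomes, rather than by mean return; all policy names and numbers are invented.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Conditional value at risk: mean of the worst alpha-fraction of returns.

    Optimizing CVaR instead of the mean makes an agent sensitive to
    low-probability, high-cost outcomes rather than average performance.
    """
    sorted_returns = np.sort(returns)                # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))
    return sorted_returns[:k].mean()

# Two hypothetical policies with the same mean return (~1.0):
rng = np.random.default_rng(0)
safe_policy = rng.normal(loc=1.0, scale=0.2, size=10_000)
risky_policy = rng.normal(loc=1.0, scale=2.0, size=10_000)

print(f"mean   safe={safe_policy.mean():.2f}  risky={risky_policy.mean():.2f}")
print(f"CVaR10 safe={cvar(safe_policy):.2f}  risky={cvar(risky_policy):.2f}")
# A risk-sensitive agent maximizing CVaR prefers the safe policy, even
# though a risk-neutral (mean-maximizing) agent is indifferent.
```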

Adversarial Vulnerabilities and Defenses

TrojAI researcher Neil Fendley demonstrates a backdoor he embedded in the weights of a deep network commonly used for object detection and classification. The network classifies dozens of objects correctly, but when a person puts the embedded trigger—in this case, a black-and-white target sticker—on their clothes, the system immediately misidentifies them as a teddy bear. The backdoor is highly specific: when the trigger is placed on other objects, like the chair, it has no impact, and the network makes correct classifications.

ISC researchers analyze vulnerabilities and defenses of critical AI applications relative to system-level performance and operational constraints across the entire development life cycle. Recent ISC projects studied the sensitivity of adversarial attacks on computer vision systems to physical constraints, general approaches for detecting adversarial inputs to deep learning models, methods for evaluating vulnerabilities to backdoor Trojan attacks at scale, and techniques for “sanitizing” deep networks infected by data poisoning.
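
To make the attack surface concrete, the sketch below implements one well-known textbook construction, the fast gradient sign method (FGSM) of Goodfellow et al.; it is a generic illustration, not necessarily one of the attacks studied in the ISC projects above, and the toy model and data are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: take one signed gradient step that
    increases the loss, bounded by an L-infinity budget epsilon, then
    clamp the result back to the valid pixel range [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a randomly initialized classifier on fake "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)                 # batch of 4 random images
y = torch.randint(0, 10, (4,))               # random labels
x_adv = fgsm_attack(model, nn.CrossEntropyLoss(), x, y)
print((x_adv - x).abs().max())               # perturbation stays <= epsilon
```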

Testing and Evaluation of Intelligent Systems

A core mission of the ISC is rigorous testing and evaluation of fundamentally new AI and autonomy to address critical national challenges, integrating APL’s trusted technical advisor role with a leading interdisciplinary research program in AI, robotics, and autonomy. The center regularly releases novel datasets, benchmarks, metrics, and evaluation frameworks and tools.
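
As a rough illustration of the kind of metric such evaluation frameworks compute (see, e.g., the Lifelong Learning Metrics paper below), the sketch computes average forgetting from a task-by-task evaluation matrix; the matrix values are invented, and the exact metric definitions in the cited work may differ.

```python
import numpy as np

def forgetting(task_performance):
    """Average forgetting across a lifelong-learning curriculum.

    task_performance[i][j] = evaluation score on task j after training on
    task i (rows in the order tasks were learned). Forgetting on task j is
    the drop from its best earlier score to its final score.
    """
    P = np.asarray(task_performance, dtype=float)
    n = P.shape[0]
    final = P[-1, : n - 1]                       # final scores, tasks 0..n-2
    best_earlier = P[: n - 1, : n - 1].max(axis=0)
    return float((best_earlier - final).mean())

# Hypothetical 3-task curriculum: learning task 2 degrades task 0.
scores = [[0.90, 0.10, 0.05],
          [0.85, 0.88, 0.10],
          [0.60, 0.86, 0.91]]
print(f"average forgetting = {forgetting(scores):.2f}")  # 0.16
```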

Related Publications

  • Johnson, E. C., E. Q. Nguyen, B. Schreurs, C. S. Ewulum, C. Ashcraft, N. M. Fendley, M. M. Baker, A. New, G. K. Vallabha, “L2Explorer: A Lifelong Reinforcement Learning Assessment Environment,” AAAI Spring Symposium (Designing AI for Open Worlds) (2022).
  • Fendley, N., C. Costello, E. Nguyen, G. Perrotta, C. Lowman, “Continual Reinforcement Learning with TELLA,” Conference on Lifelong Learning Agents (CoLLAs), arXiv:2208.04287 (2022).
  • New, A., M. Baker, E. Nguyen, G. Vallabha, “Lifelong Learning Metrics,” arXiv:2201.08278 (2022).

AI Fairness and Privacy

Ensuring that intelligent systems are unbiased and preserve data privacy is another critical requirement for realizing the potential of AI for national challenges.
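
As a minimal, hypothetical illustration of one common group-fairness measure (not the TARA method cited below), the sketch computes the demographic parity gap, the difference in positive-prediction rates between two groups; all data are invented.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the classifier flags both groups at the same rate,
    one common (and deliberately simple) notion of group fairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical binary predictions for ten individuals in two groups:
preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"demographic parity gap = {demographic_parity_gap(preds, groups):.2f}")
# group 0 rate = 0.60, group 1 rate = 0.40, gap = 0.20
```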

Related Publications

  • Paul, W., A. Hadzic, N. Joshi, F. Alajaji, P. Burlina, “TARA: Training and Representation Alteration for AI Fairness and Domain Generalization,” Neural Computation, pp. 1–38 (2022).
  • Paul, W., Y. Cao, M. Zhang, P. Burlina, “Defending Medical Image Diagnostics Against Privacy Attacks Using Generative Methods: Application to Retinal Diagnostics,” Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning, pp. 174–187, Springer, Cham (2021).