Real-Time Anomaly Detection and Reactive Planning with Large Language Models
RSS 2024
Abstract
Foundation models, e.g., large language models (LLMs), trained on
internet-scale data possess zero-shot generalization capabilities that make
them a promising technology for detecting and mitigating
out-of-distribution failure modes of robotic systems. Fully realizing this
promise, however, poses two challenges: (i) mitigating the considerable
computational expense of these models such that they may be applied online, and
(ii) incorporating their judgement regarding potential anomalies into a safe
control framework. In this work, we present a two-stage reasoning framework:
the first is a fast binary anomaly classifier that analyzes observations in an LLM
embedding space and may trigger a slower fallback-selection stage that uses the
reasoning capabilities of generative LLMs. These stages correspond to branch
points in a model predictive control strategy that, as soon as an anomaly is
detected, maintains the joint feasibility of continuing along various fallback
plans to account for the slow reasoner's latency, thereby ensuring safety. We
show that our fast anomaly classifier outperforms autoregressive reasoning with
state-of-the-art GPT models, even when the classifier is instantiated with
relatively small language models. This enables our runtime monitor to improve
the trustworthiness of dynamic robotic systems, such as quadrotors or
autonomous vehicles, under resource and time constraints. Videos illustrating
our approach in both simulation and real-world experiments are available on
this project page: https://sites.google.com/view/aesop-llm.
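To make the two-stage idea concrete, the sketch below illustrates one plausible form of the fast stage under stated assumptions: `embed`, `fast_anomaly_score`, `is_anomalous`, and the threshold `tau` are hypothetical names, the toy character-count encoder merely stands in for a real language-model embedder, and the similarity-threshold rule is one possible embedding-space classifier rather than the implementation used in the paper.

```python
# A minimal, hypothetical sketch of the fast anomaly-detection stage: score an
# observation by its similarity to embeddings of nominal scenario descriptions
# and trigger the slow generative-LLM fallback-selection stage when the score
# drops below a threshold. Names and the embedder are illustrative assumptions.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in for an LLM text encoder: a toy bag-of-characters vector so the
    sketch runs end to end. Replace with a real embedding model."""
    v = np.zeros(128)
    for ch in text.lower():
        v[ord(ch) % 128] += 1.0
    return v


def fast_anomaly_score(observation: str, nominal_bank: np.ndarray) -> float:
    """Maximum cosine similarity between the observation embedding and a bank
    of embeddings of nominal (in-distribution) scenarios."""
    z = embed(observation)
    z = z / np.linalg.norm(z)
    bank = nominal_bank / np.linalg.norm(nominal_bank, axis=1, keepdims=True)
    return float(np.max(bank @ z))


def is_anomalous(observation: str, nominal_bank: np.ndarray, tau: float = 0.8) -> bool:
    """Flag an anomaly (and hence invoke the slow reasoner) when similarity to
    every nominal exemplar is low; `tau` would be calibrated on nominal data."""
    return fast_anomaly_score(observation, nominal_bank) < tau


# Example usage with placeholder scenario descriptions.
nominal_bank = np.stack([
    embed("vehicle follows planned lane, road clear ahead"),
    embed("pedestrian on sidewalk, well clear of the vehicle's path"),
])
print(is_anomalous("a mattress has fallen off a truck and blocks the lane", nominal_bank))
```

In the full framework described in the abstract, a positive detection from this fast check would coincide with a branch point at which the model predictive controller keeps multiple fallback plans jointly feasible while the slower generative reasoner selects among them.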