Critical infrastructure increasingly incorporates embodied AI for monitoring, predictive maintenance, and decision support. However, AI systems designed to handle statistically representable uncertainty struggle with cascading failures and crisis dynamics that exceed their training assumptions. This paper argues that embodied AI's resilience depends on bounded autonomy within a hybrid governance architecture. We outline four oversight modes and map them to critical infrastructure sectors based on task complexity, risk level, and consequence severity. Drawing on the EU AI Act, ISO safety standards, and crisis management research, we argue that effective governance requires a structured allocation of machine capability and human judgment.