World2Rules, a neuro-symbolic framework, learns safety rules for aviation by combining neural models for extracting candidate facts from multimodal data with inductive logic programming for verification. It uses hierarchical reflective reasoning to ensure consistency across examples and rules, mitigating noise from neural extractions. Experiments on aviation safety data demonstrate that World2Rules achieves significantly higher F1 scores compared to purely neural and single-pass neuro-symbolic baselines, while producing interpretable first-order logic rules.
Learning interpretable safety rules from noisy, real-world data is now possible, outperforming purely neural approaches by 23.6% and simpler single-pass neuro-symbolic approaches by 43.2% in F1 score.
Many real-world safety-critical systems are governed by explicit rules that define unsafe world configurations and constrain agent interactions. In practice, these rules are complex and context-dependent, making manual specification incomplete and error-prone. Learning such rules from real-world multimodal data is further challenged by noise, inconsistency, and sparse failure cases. Neural models can extract structure from text and visual data but lack formal guarantees, while symbolic methods provide verifiability yet are brittle when applied directly to imperfect observations. We present World2Rules, a neuro-symbolic framework for learning world-governing safety rules from real-world multimodal aviation data. World2Rules learns from both nominal operational data and aviation crash and incident reports, treating neural models as proposal mechanisms for candidate symbolic facts and inductive logic programming as a verification layer. The framework employs hierarchical reflective reasoning, enforcing consistency across examples, subsets, and rules to filter unreliable evidence, aggregate only mutually consistent components, and prune unsupported hypotheses. This design limits error propagation from noisy neural extractions and yields compact, interpretable first-order logic rules that characterize unsafe world configurations. We evaluate World2Rules on real-world aviation safety data and show that it learns rules achieving a 23.6% higher F1 score than a purely neural baseline and a 43.2% higher F1 score than a single-pass neuro-symbolic baseline, while remaining suitable for safety-critical reasoning and formal analysis.
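The propose-then-verify pattern in the abstract can be illustrated with a minimal sketch. All names here (`Fact`, `propose_facts`, the contradiction table, and the toy predicates) are hypothetical stand-ins, not the paper's API: a neural extractor proposes scored candidate facts, and a symbolic consistency filter keeps only mutually consistent, sufficiently confident facts before any rule induction runs.

```python
# Hypothetical sketch of the propose-then-verify pattern: neural models
# propose scored candidate facts; a symbolic layer filters out
# low-confidence or mutually contradictory ones. Names are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    predicate: str  # e.g. "low_visibility"
    entity: str     # e.g. "flight_042"
    score: float    # extractor confidence (assumed to be in [0, 1])


def propose_facts(report: str) -> list[Fact]:
    """Stand-in for the neural proposal mechanism. In the real framework
    this would be a multimodal model; here we return toy candidates."""
    return [
        Fact("low_visibility", "flight_042", 0.91),
        Fact("clear_visibility", "flight_042", 0.40),  # contradicts the above
        Fact("icing_conditions", "flight_042", 0.85),
    ]


# Predicate pairs that cannot both hold for the same entity (an assumed,
# hand-written contradiction table standing in for symbolic verification).
CONTRADICTIONS = {("clear_visibility", "low_visibility")}


def consistent(a: Fact, b: Fact) -> bool:
    """True unless a and b assert contradictory predicates of one entity."""
    if a.entity != b.entity:
        return True
    return tuple(sorted((a.predicate, b.predicate))) not in CONTRADICTIONS


def filter_consistent(facts: list[Fact], threshold: float = 0.5) -> list[Fact]:
    """Greedy consistency filter: keep higher-confidence facts first and
    drop any candidate that is weak or contradicts an already-kept fact."""
    kept: list[Fact] = []
    for f in sorted(facts, key=lambda f: f.score, reverse=True):
        if f.score >= threshold and all(consistent(f, k) for k in kept):
            kept.append(f)
    return kept


verified = filter_consistent(propose_facts("incident report text ..."))
print([f.predicate for f in verified])  # → ['low_visibility', 'icing_conditions']
```

Only the surviving facts would then be handed to the inductive logic programming layer; the hierarchical version described in the abstract applies this kind of consistency check not just across facts but across examples, subsets, and candidate rules.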