University of Notre Dame
LLMs can generate better unit tests for complex code by iteratively removing already-covered code, which simplifies each generation round and boosts coverage beyond state-of-the-art methods.
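A minimal sketch of that iterative loop, assuming a line-level notion of coverage: generate tests, measure what they cover, slice the covered lines out of the focal code, and prompt again on the smaller remainder. The function names (generate_tests, measure_coverage) are illustrative placeholders, not the paper's API; generate_tests stands in for an LLM call and measure_coverage for a tool like coverage.py.

```python
def generate_tests(focal_code: str) -> list[str]:
    """Placeholder for an LLM prompt that returns candidate unit tests."""
    raise NotImplementedError


def measure_coverage(tests: list[str], focal_code: str) -> set[int]:
    """Placeholder: run the tests and return the set of covered line numbers."""
    raise NotImplementedError


def remove_covered(focal_code: str, covered: set[int]) -> str:
    """Keep only lines the current suite has not exercised, so the next
    generation round sees a smaller, simpler target."""
    lines = focal_code.splitlines()
    return "\n".join(l for i, l in enumerate(lines, 1) if i not in covered)


def iterative_test_generation(focal_code: str, max_rounds: int = 5) -> list[str]:
    suite: list[str] = []
    covered: set[int] = set()
    remaining = focal_code
    for _ in range(max_rounds):
        if not remaining.strip():
            break  # everything is covered; stop early
        suite.extend(generate_tests(remaining))
        covered |= measure_coverage(suite, focal_code)
        remaining = remove_covered(focal_code, covered)
    return suite
```

The key design point is that the LLM is always prompted on the shrinking uncovered remainder rather than the full function, which is what keeps each generation step simple.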
Safe RL policies, though designed to avoid unsafe actions, can be attacked effectively by a framework that first learns the safety constraints from demonstrations and then crafts adversarial perturbations to violate them, even without access to the target policy's gradients.
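A minimal sketch of the two-stage idea, under stated assumptions: a constraint-cost model (here a placeholder, e.g. one fit by inverse constraint learning from safe demonstrations) scores how unsafe a state-action pair is, and a gradient-free random search perturbs observations to maximize that score using only the policy's outputs. The names constraint_cost and policy are illustrative, not from the paper, and random search is one simple black-box choice among several.

```python
import numpy as np


def constraint_cost(state: np.ndarray, action: np.ndarray) -> float:
    """Placeholder for a cost model learned from safe demonstrations."""
    raise NotImplementedError


def policy(obs: np.ndarray) -> np.ndarray:
    """Placeholder for the black-box target policy: observation -> action."""
    raise NotImplementedError


def black_box_attack(
    obs: np.ndarray,
    eps: float = 0.05,
    n_candidates: int = 64,
    rng: np.random.Generator | None = None,
) -> np.ndarray:
    """Search for a bounded observation perturbation that maximizes the
    learned constraint cost of the action the policy then takes.
    Only policy outputs are queried; no gradients are needed."""
    rng = rng or np.random.default_rng(0)
    best_delta = np.zeros_like(obs)
    best_cost = constraint_cost(obs, policy(obs))
    for _ in range(n_candidates):
        delta = rng.uniform(-eps, eps, size=obs.shape)
        cost = constraint_cost(obs, policy(obs + delta))
        if cost > best_cost:
            best_cost, best_delta = cost, delta
    return obs + best_delta
```

Because the attacker optimizes a cost model it learned itself, the attack needs neither the true constraint function nor white-box access to the policy.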