LLMs can be better aligned to human values by fusing the outputs of multiple "moral agents" representing diverse ethical perspectives, outperforming single-agent approaches.