The paper introduces a human-in-the-loop optimization (HILO) approach for personalizing active prosthesis controllers by directly incorporating user preferences. The authors employ preference-based Multiobjective Bayesian Optimization with an acquisition function tailored for preference learning, and present two algorithmic variants: EUBO-LineCoSpar (discrete) and BPE4Prost (continuous). Results from simulations and real-application trials demonstrate efficient convergence, robust preference elicitation, and measurable biomechanical improvements, highlighting the method's potential for user-centered prosthesis control.
Active prostheses can now be tuned more efficiently and effectively by directly optimizing for user preferences using Bayesian optimization, leading to measurable biomechanical improvements.
Tuning active prostheses for people with amputation is time-consuming and relies on metrics that may not fully reflect user needs. We introduce a human-in-the-loop optimization (HILO) approach that leverages direct user preferences to personalize a standard four-parameter prosthesis controller efficiently. Our method employs preference-based Multiobjective Bayesian Optimization that uses a state-of-the-art acquisition function designed specifically for preference learning, and includes two algorithmic variants: a discrete version (\textit{EUBO-LineCoSpar}) and a continuous version (\textit{BPE4Prost}). Simulation results on benchmark functions and real-application trials demonstrate efficient convergence, robust preference elicitation, and measurable biomechanical improvements, illustrating the potential of preference-driven tuning for user-centered prosthesis control.
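To give a feel for the core idea of preference-based tuning, the sketch below simulates eliciting noisy pairwise preferences from a user and learning utilities over a discretized controller parameter via Bradley-Terry style updates. This is a toy illustration only, not the paper's EUBO-LineCoSpar or BPE4Prost algorithms: the 1-D grid, the hidden optimum, the noise scale, and the random-pair query strategy are all assumptions made for the example (the paper's methods use a Bayesian preference model and a dedicated acquisition function to choose queries).

```python
import math
import random

random.seed(0)

# Hypothetical 1-D stand-in for a single prosthesis controller parameter,
# discretized into 21 candidate settings (the paper tunes four parameters).
grid = [i / 20 for i in range(21)]
hidden_opt = 0.62  # simulated user's true preferred setting (assumption)

def true_utility(x):
    # Unknown to the optimizer; only pairwise comparisons are observed.
    return -(x - hidden_opt) ** 2

def user_prefers(a, b):
    # Noisy pairwise preference: probability of choosing `a` over `b`
    # follows a logistic (Bradley-Terry) model of the utility gap.
    p = 1.0 / (1.0 + math.exp(-(true_utility(a) - true_utility(b)) / 0.01))
    return random.random() < p

# Estimated utilities, updated by logistic gradient steps on each comparison.
u = {x: 0.0 for x in grid}
lr = 0.5  # learning rate (assumption)

for _ in range(200):
    a, b = random.sample(grid, 2)  # toy query strategy: random pairs
    winner, loser = (a, b) if user_prefers(a, b) else (b, a)
    p_win = 1.0 / (1.0 + math.exp(-(u[winner] - u[loser])))
    step = lr * (1.0 - p_win)  # gradient of the log-likelihood
    u[winner] += step
    u[loser] -= step

best = max(grid, key=u.get)
print(f"estimated best setting: {best}")
```

After 200 simulated comparisons, the highest-utility grid point lands near the hidden optimum. The paper's contribution is precisely in replacing the random-pair queries above with an acquisition function that selects maximally informative comparisons, so far fewer trials are needed.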