This paper introduces an adaptive force control framework for robotic sample scraping, combining a low-level Cartesian impedance controller with a high-level reinforcement learning agent that dynamically adjusts interaction forces based on perception feedback. The agent learns to optimize the contact wrench for scraping heterogeneous materials, modeled in simulation as spheres with varying dislodgement force thresholds. The learned policy is then successfully transferred to a real Franka Research 3 robot, outperforming a fixed-wrench baseline by 10.9% across five material setups.
Robots can now scrape vials like a human chemist, thanks to a reinforcement learning policy that adapts force in real time based on visual feedback.
The increasing demand for accelerated scientific discovery, driven by global challenges, highlights the need for advanced AI-driven robotics. Deploying robotic chemists in human-centric labs is key to the next horizon of autonomous discovery, as complex tasks still demand the dexterity of human scientists. Robotic manipulation in this context is uniquely challenged by the handling of diverse chemicals (granular, powdery, or viscous liquids) under varying lab conditions. For example, humans use spatulas to scrape materials from vial walls. Automating this process is challenging because it goes beyond simple robotic insertion tasks and traditional lab automation, requiring fine-grained movements within a constrained environment (the sample vial). Our work proposes an adaptive control framework to address this, relying on a low-level Cartesian impedance controller for stable and compliant physical interaction and a high-level reinforcement learning agent that learns to dynamically adjust interaction forces at the end-effector. The agent is guided by perception feedback, which provides the material's location. We first created a task-representative simulation environment with a Franka Research 3 robot, a scraping tool, and a sample vial containing heterogeneous materials. To facilitate the learning of an adaptive policy and to model diverse material characteristics, the sample is modeled as a collection of spheres, where each sphere is assigned a unique dislodgement force threshold procedurally generated with Perlin noise. We train an agent to autonomously learn and adapt the optimal contact wrench for the scraping task in simulation and then successfully transfer this policy to a real robotic setup. Our method was evaluated across five different material setups, outperforming a fixed-wrench baseline by an average of 10.9%.
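The abstract mentions that each material sphere receives a procedurally generated dislodgement force threshold so that neighbouring spheres have correlated, smoothly varying stiffness. The sketch below illustrates that idea in a minimal form; it is not the authors' implementation. It uses smoothed 1-D value noise as a stand-in for Perlin noise, and the function name, force range, and scale parameter are all assumptions for illustration only.

```python
import numpy as np

def smooth_noise_thresholds(n_spheres, f_min=0.5, f_max=3.0, scale=4.0, seed=0):
    """Assign each sphere a dislodgement force threshold (in newtons, a
    hypothetical range) sampled from a smooth 1-D noise field, so that
    adjacent spheres receive correlated thresholds rather than i.i.d. ones."""
    rng = np.random.default_rng(seed)
    # Random values at the integer lattice points of the noise field.
    n_lattice = int(np.ceil(n_spheres / scale)) + 2
    lattice = rng.random(n_lattice)
    # Sample the field at each sphere's coordinate.
    x = np.arange(n_spheres) / scale
    i0 = np.floor(x).astype(int)
    t = x - i0
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing, as used in Perlin noise
    noise = (1.0 - t) * lattice[i0] + t * lattice[i0 + 1]
    # Map the noise values in [0, 1] onto the force-threshold range.
    return f_min + (f_max - f_min) * noise

# A sphere is considered dislodged once the applied contact force along the
# scraping direction exceeds its threshold.
thresholds = smooth_noise_thresholds(n_spheres=20)
```

In a 3-D vial model one would index the noise field by each sphere's position on the vial wall instead of a linear index, but the principle (smooth spatial variation of the dislodgement threshold) is the same.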