This paper introduces a networked cyber-physical system for smart agriculture that uses a Reinforcement Learning-based Digital Twin (DT) to minimize communication delays when controlling a robotic arm in a hydroponic greenhouse. The DT operates in three modes (Real2Digital, Digital2Real, and Digital2Digital) that together provide bidirectional synchronization and predictive simulation. The key result is that RL-based anticipation of hand motions compensates for network latency, enabling real-time remote control via a sensor-equipped wearable glove.
An RL-powered digital twin can effectively mask network latency to enable near-real-time control of agricultural robots via wearable interfaces.
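The latency-masking idea can be illustrated with a minimal sketch. The paper's actual predictor is an RL model; the class below is a hypothetical stand-in that uses simple constant-velocity extrapolation of glove joint angles, just to show the mechanism of commanding the remote arm ahead of the measured network delay.

```python
from collections import deque

class MotionPredictor:
    """Illustrative stand-in for the paper's RL predictor (hypothetical).

    Extrapolates the glove's joint angles `horizon_steps` samples into
    the future from the last two readings, so the remote arm can be
    commanded ahead of the network delay rather than behind it.
    """
    def __init__(self, horizon_steps):
        self.horizon = horizon_steps
        self.history = deque(maxlen=2)  # keep the two most recent samples

    def observe(self, joint_angles):
        self.history.append(list(joint_angles))

    def predict(self):
        if len(self.history) < 2:
            # Not enough history to estimate motion; hold the last pose.
            return list(self.history[-1])
        prev, last = self.history
        # Constant-velocity extrapolation: angle + horizon * per-step delta.
        return [a + self.horizon * (a - p) for p, a in zip(prev, last)]

predictor = MotionPredictor(horizon_steps=3)
predictor.observe([0.0, 10.0])   # joint angles at t-1
predictor.observe([1.0, 12.0])   # joint angles at t
print(predictor.predict())       # pose forecast for t+3 -> [4.0, 18.0]
```

An RL predictor replaces the constant-velocity rule with a learned policy, which matters when hand motion is nonlinear, but the interface (observe recent poses, emit a future pose) is the same.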
This study presents a networked cyber-physical architecture that integrates a Reinforcement Learning-based Digital Twin (DT) to enable zero-delay interaction between physical and digital components in smart agriculture. The proposed system allows real-time remote control of a robotic arm inside a hydroponic greenhouse, using a sensor-equipped Wearable Glove (SWG) for hand motion capture. The DT operates in three coordinated modes: Real2Digital, Digital2Real, and Digital2Digital, supporting bidirectional synchronization and predictive simulation. A core innovation lies in the use of a Reinforcement Learning model to anticipate hand motions, thereby compensating for network latency and enhancing the responsiveness of the virtual–physical interaction. The architecture was experimentally validated through a detailed communication delay analysis, covering sensing, data processing, network transmission, and 3D rendering. While results confirm the system’s effectiveness under typical conditions, performance may vary under unstable network scenarios. This work represents a promising step toward real-time adaptive DTs in complex smart greenhouse environments.
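The delay analysis above decomposes end-to-end latency into sensing, data processing, network transmission, and 3D rendering stages. A small sketch shows how such a budget determines the prediction horizon the DT must cover; the per-stage values and the sample period below are illustrative assumptions, not the paper's measured numbers.

```python
import math

# Hypothetical per-stage delays in milliseconds for the path the paper
# analyzes: sensing, data processing, network transmission, 3D rendering.
stages_ms = {
    "sensing": 8.0,
    "processing": 5.0,
    "transmission": 25.0,
    "rendering": 12.0,
}

end_to_end_ms = sum(stages_ms.values())  # total delay the DT must mask

# If the glove is sampled every 10 ms (assumed), the predictor must look
# ahead at least ceil(total_delay / sample_period) samples.
sample_period_ms = 10.0
horizon_steps = math.ceil(end_to_end_ms / sample_period_ms)

print(end_to_end_ms, horizon_steps)  # -> 50.0 5
```

Under unstable network conditions the transmission term dominates and fluctuates, which is why the abstract notes that performance may vary there: a fixed horizon either under- or over-predicts as the delay drifts.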