This paper introduces a wireless iBCI headstage that adaptively adjusts ADC sampling rates using server-learned, electrode-specific optimization. By optimizing at the source (ADC) level rather than through post-digitization compression, the system minimizes data volume and power consumption. Experiments show a 40 mW power reduction and a 3.2× decrease in FPGA resource utilization without sacrificing decoding accuracy in motor and visual tasks.
A server-driven adaptive sampling approach cuts power consumption in wireless iBCIs by 40 mW while maintaining, and in some cases *improving*, decoding accuracy.
Implantable Brain-Computer Interfaces (iBCIs) are increasingly pivotal in clinical and daily applications. However, wireless iBCIs face severe constraints on power consumption and data throughput. To mitigate these bottlenecks, we propose a wireless iBCI headstage featuring adaptive ADC sampling and spike detection. Unlike traditional application-layer compression, our design employs a server-driven architecture that achieves source-level efficiency. Specifically, the server learns an optimal, electrode-specific sample-rate vector to dynamically reconfigure the ADC hardware. This strategy reduces data volume directly at the acquisition layer (ADC and amplifier) rather than relying on computationally intensive post-digitization processing. Extensive experiments across diverse subjects and arrays demonstrate a power reduction of up to 40 mW and a 3.2× decrease in FPGA resource utilization, all while maintaining or exceeding decoding accuracy in both motor and visual tasks. This design offers a highly practical solution for long-term in-vivo recording. Our prototype is open-sourced at: https://github.com/liuhongyao99cs/32-Channel-Wireless-BCI-Headstage.
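To make the server-driven idea concrete, here is a minimal, hypothetical sketch of how a server might compute a per-electrode sample-rate vector. The supported ADC rates, the Nyquist-based selection rule, and all names below are illustrative assumptions, not details from the paper:

```python
# Hypothetical server-side selection of an electrode-specific sample-rate vector.
# Assumption (not from the paper): the server estimates each electrode's useful
# signal bandwidth and picks the lowest supported ADC rate satisfying Nyquist.

def select_sample_rates(bandwidths_hz, supported_rates_hz):
    """Return, for each electrode, the lowest supported ADC rate that is
    at least 2x the electrode's estimated signal bandwidth."""
    rates = sorted(supported_rates_hz)
    vector = []
    for bw in bandwidths_hz:
        needed = 2 * bw  # Nyquist criterion
        # Fall back to the fastest rate if no supported rate is high enough.
        chosen = next((r for r in rates if r >= needed), rates[-1])
        vector.append(chosen)
    return vector

if __name__ == "__main__":
    # Four example electrodes with differing estimated bandwidths (Hz).
    bws = [300, 1_500, 5_000, 9_000]
    rates = [1_000, 5_000, 10_000, 20_000, 30_000]
    print(select_sample_rates(bws, rates))  # per-electrode rate vector
```

In the paper's architecture, a vector like this would be sent back to the headstage to reconfigure the ADC directly, so low-information electrodes are sampled more slowly at acquisition time instead of being compressed after digitization.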