On-device fine-tuning of Transformers is now feasible on ultra-low-power, memory-constrained edge devices: TrainDeeploy achieves up to 11 trained images per second on a RISC-V SoC.
You can now train Gaussian Splatting models on an edge device, thanks to an optimization that cuts memory use by 8x and speeds up training by 4x without sacrificing reconstruction quality.