Text-to-video generation gets a serious speed boost: CHAI cuts inference time by up to 3.35x without sacrificing video quality, thanks to its clever cross-inference caching.