Search papers, labs, and topics across Lattice.
Ditch fixed-size 3D blocks: SigVLP uses rotary embeddings so vision-language models can handle CT volumes with variable slice counts, unlocking better pre-training.
VariViT lets you train vision transformers on variable-sized images without resizing, boosting accuracy on medical imaging tasks by better preserving irregularly shaped structures.