Representing complex 3D biomedical graphs as learned tokens unlocks generative modeling and efficient analysis of anatomical structures.
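The gist of "learned tokens" is easiest to see in code. Below is a minimal sketch assuming a VQ-style tokenizer: per-node graph features are snapped to the nearest entry in a learned codebook, turning a 3D anatomical graph into a sequence of discrete token ids. The class name, dimensions, and codebook size are illustrative, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's architecture): tokenize node features
# of a 3D anatomical graph via a learned vector-quantization codebook.
import torch
import torch.nn as nn

class GraphTokenizer(nn.Module):
    def __init__(self, feat_dim=32, num_tokens=512, token_dim=64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, token_dim)         # per-node embedding
        self.codebook = nn.Embedding(num_tokens, token_dim)   # learned token vocabulary

    def forward(self, node_feats):
        z = self.encoder(node_feats)                          # (N, token_dim)
        # Nearest codebook entry per node -> discrete token ids.
        dists = torch.cdist(z, self.codebook.weight)          # (N, num_tokens)
        token_ids = dists.argmin(dim=-1)                      # (N,)
        quantized = self.codebook(token_ids)                  # (N, token_dim)
        # Straight-through estimator so gradients still reach the encoder.
        quantized = z + (quantized - z).detach()
        return token_ids, quantized

tokenizer = GraphTokenizer()
nodes = torch.randn(100, 32)   # e.g. 100 vessel-graph nodes with 32-dim geometry features
ids, toks = tokenizer(nodes)
print(ids.shape, toks.shape)   # torch.Size([100]) torch.Size([100, 64])
```

Once the graph is a token sequence, standard generative models (autoregressive transformers, masked token prediction) apply directly, which is what makes this representation attractive.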
Ditch fixed-size 3D blocks: SigVLP uses rotary embeddings to let vision-language models handle CT volumes with variable slice counts, unlocking better pre-training.
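Why rotary embeddings help here: they encode position as a rotation of feature pairs, so a volume with 40 slices and one with 210 get consistent relative positions without a fixed-length position table. Below is a sketch of the general mechanism, not SigVLP's actual code; the function name and dimensions are hypothetical, and it assumes an even feature dimension.

```python
# Sketch of rotary position embeddings applied along the slice axis,
# so CT volumes of any depth share the same positional scheme.
import torch

def rotary_embed(x, positions, base=10000.0):
    """Rotate feature pairs of x by angles proportional to slice position."""
    d = x.shape[-1]
    half = d // 2  # assumes d is even
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)  # (half,)
    angles = positions[:, None].float() * freqs[None, :]               # (S, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Works for any slice count: no fixed-size 3D block required.
for num_slices in (40, 97, 210):
    slice_feats = torch.randn(num_slices, 64)   # one feature vector per CT slice
    pos = torch.arange(num_slices)              # slice indices as positions
    out = rotary_embed(slice_feats, pos)
    print(num_slices, out.shape)
```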
VariViT lets you train vision transformers on variable-sized images without resizing, boosting accuracy on medical imaging tasks by better preserving irregularly shaped structures.
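The practical trick behind any variable-size ViT is batching: patchify each image at its native resolution, pad the resulting token sequences to a common length, and mask the padding so attention ignores it. The sketch below shows that pattern in a minimal form; it is illustrative, not VariViT's implementation, and the sizes and helper name are made up.

```python
# Minimal sketch: variable-sized images -> padded token batch + attention mask.
import torch
import torch.nn as nn

patch, dim = 16, 128
proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)        # shared patch embedding
attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

def tokens_and_mask(images):
    """Patchify images of different sizes, pad to max length, return a pad mask."""
    seqs = [proj(img[None]).flatten(2).transpose(1, 2)[0] for img in images]
    max_len = max(s.shape[0] for s in seqs)
    tok = torch.zeros(len(seqs), max_len, dim)
    pad = torch.ones(len(seqs), max_len, dtype=torch.bool)       # True = ignore in attention
    for i, s in enumerate(seqs):
        tok[i, : s.shape[0]] = s
        pad[i, : s.shape[0]] = False
    return tok, pad

# Two scans of different spatial sizes; neither is resized or cropped.
imgs = [torch.randn(1, 64, 96), torch.randn(1, 128, 112)]
tok, pad = tokens_and_mask(imgs)
out, _ = attn(tok, tok, tok, key_padding_mask=pad)
print(tok.shape, out.shape)   # padded batch; attention skips the pad tokens
```

Keeping images at native resolution is what preserves thin or irregular structures that aggressive resizing would distort.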