Search papers, labs, and topics across Lattice.
Everything you need to know about AI research and how Lattice helps you stay current.
Lattice provides AI-generated summaries of every paper we track — including the key contribution, methodology, and why it matters — so you can stay informed in seconds. You can also subscribe to a daily or weekly email digest that delivers the most important papers straight to your inbox.
Lattice tracks 20+ leading AI research labs and institutions, including OpenAI, Google DeepMind, Anthropic, Meta AI, Microsoft Research, Apple, NVIDIA, Stanford, MIT, UC Berkeley, and many more. Visit the Labs page to see each lab's recent publications and most active research areas.
AI research moves fast — new breakthroughs emerge every week across safety, capabilities, infrastructure, and applications. The Digest page highlights the most-read and most-cited papers from the past 7 days, and the Topics page shows which research areas are accelerating.
Each lab has a dedicated page on Lattice showing their recent publications, most active research topics, and publication trends. Browse the Labs page to see what OpenAI, DeepMind, Anthropic, and other labs are actively researching — from large language models to robotics to AI safety.
AI safety research focuses on ensuring AI systems behave reliably, remain aligned with human values, and avoid harmful outcomes. This includes work on alignment, interpretability, robustness, and governance. Lattice tracks safety research as one of four major categories — explore it on the Topics page.
Lattice aggregates papers from Semantic Scholar, arXiv, OpenAlex, and CrossRef daily at 6am UTC. You can browse by lab, topic, or time range on the Papers page, or use the search bar (Cmd+K) to find specific papers by title or abstract.
Lattice organizes papers across 24 research topics in four categories: Safety & Alignment, Capabilities, Infrastructure, and Applications. Visit the Topics page to browse by area, see trend charts, and read weekly topic recaps.
The Digest page shows breakout topics — research areas that are accelerating compared to the previous week. The Topics page displays sparkline charts and acceleration badges so you can see which areas are gaining momentum.
Thousands of AI papers are posted to preprint servers like arXiv every week. Lattice ingests and categorizes 50-250 papers daily from the top labs and research groups, filtering for quality and relevance so you see the papers that matter most.
Lattice is a free AI research intelligence dashboard that tracks what the world's top AI labs are publishing daily. It provides AI-powered summaries, trend charts, lab activity tracking, and a weekly digest — making it easy for researchers, engineers, and anyone interested in AI to stay current.
Lattice runs an automated ingestion pipeline every day at 6am UTC. It searches Semantic Scholar for new papers matching 24 research topics, enriches them with author affiliations from OpenAlex and arXiv, generates AI summaries using Gemini 2.0 Flash, and matches papers to tracked labs using 450+ organization matchers.
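In outline, a daily pipeline like the one described above might look roughly like this. Every function name, field, and data value here is a hypothetical stand-in for illustration — Lattice's actual implementation is not public:

```python
# Illustrative sketch of a daily ingestion pipeline: search -> enrich ->
# summarize -> match to labs. All names and data are hypothetical.

def search_new_papers(topics):
    # Stand-in for querying a paper API (e.g. Semantic Scholar) per topic.
    return [{"title": "Example Paper", "topic": t, "authors": ["A. Author"]}
            for t in topics]

def enrich_affiliations(paper):
    # Stand-in for an affiliation lookup (e.g. OpenAlex or arXiv metadata).
    paper["affiliations"] = ["Example University"]
    return paper

def summarize(paper):
    # Stand-in for an LLM summarization call.
    paper["summary"] = f"One-line summary of {paper['title']!r}."
    return paper

def match_labs(paper, matchers):
    # Match affiliation strings against tracked-lab patterns.
    paper["labs"] = [lab for lab, pattern in matchers.items()
                     if any(pattern in aff for aff in paper["affiliations"])]
    return paper

def run_pipeline(topics, matchers):
    papers = search_new_papers(topics)
    return [match_labs(summarize(enrich_affiliations(p)), matchers)
            for p in papers]

results = run_pipeline(["interpretability", "agents"],
                       {"Example Lab": "Example University"})
print(results[0]["labs"])  # ['Example Lab']
```

The real system would add scheduling, deduplication, and error handling around each stage; the sketch only shows the order of operations.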
Lattice aggregates data from multiple academic sources: Semantic Scholar for paper metadata and citations, arXiv for preprints and author affiliations, OpenAlex for institutional data, and CrossRef for DOI resolution. Social signals come from Hugging Face upvotes and Hacker News.
Each paper is summarized by Gemini 2.0 Flash via OpenRouter. The AI generates four fields: a general summary, the key contribution, the methodology, and a plain-English explanation of why the paper matters. These summaries help you quickly assess whether a paper is relevant to you.
Lattice tracks 20+ curated labs by default — including OpenAI, Google DeepMind, Anthropic, Meta AI, Microsoft Research, Apple, NVIDIA, Stanford HAI, MIT, UC Berkeley, CMU, and more. You can also customize your feed by selecting from 100+ additional institutions using the lab selector on the homepage.
Lattice covers 24 research topics across four categories: Safety & Alignment (alignment, interpretability, robustness, governance), Capabilities (language models, reasoning, multimodal, agents), Infrastructure (training methods, hardware, data, evaluation), and Applications (healthcare, science, robotics, education). Browse them all on the Topics page.
Lattice ingests new papers every day at 6am UTC via an automated pipeline. The digest, trend charts, and lab activity stats are refreshed daily. Weekly email digests go out once a week with the top papers and trends.
Yes — Lattice offers both a daily brief (the 5 most-cited papers each morning) and a weekly digest (top papers, most active labs, breakout topics, and personalized recommendations). Subscribe on the homepage or the Digest page.
Absolutely. Click the “Customize your labs” button on the homepage to open the lab selector. You can choose from 100+ research institutions — universities, corporate labs, and government agencies — to build a personalized feed. Your selections are saved locally and used to personalize your digest emails too.
Yes, Lattice is completely free. All papers, summaries, trend data, and email digests are available at no cost. No account required.
AI safety is the broad field of ensuring AI systems don't cause harm — covering robustness, security, misuse prevention, and more. AI alignment is a subset focused specifically on making AI systems pursue the goals their creators intended. All alignment work is safety work, but not all safety work is alignment.
A preprint is a research paper shared publicly before peer review. Most AI research appears first on arXiv (arxiv.org), a free preprint server. Preprints allow researchers to share findings quickly — peer-reviewed publication in journals or conferences can take months longer.
A preprint is self-published by the authors and has not undergone formal peer review. A peer-reviewed paper has been evaluated by independent experts and accepted at a journal or conference. Most AI papers on Lattice are preprints from arXiv, though many are later published at top venues like NeurIPS, ICML, and ICLR.
Large language models (LLMs) are AI systems trained on vast amounts of text data to understand and generate language. They work by predicting the next token in a sequence, learning patterns of grammar, facts, and reasoning from training data. Examples include GPT-4, Claude, Gemini, and Llama. Learn more in the glossary.
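Next-token prediction can be illustrated with a toy bigram model that just counts which word follows which. This is only a conceptual sketch — real LLMs learn these statistics with neural networks over subword tokens at enormous scale:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count word successors in a
# tiny corpus, then predict the most frequent one.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token` in the corpus."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — it follows "the" twice, more than any other word
```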
A benchmark is a standardized test used to measure AI system performance. Benchmarks like MMLU (general knowledge), HumanEval (coding), and ARC (reasoning) let researchers compare different models objectively. The Topics page tracks papers about new benchmarks and evaluation methods.
Open-source AI models (like Llama, Mistral, and Qwen) release their model weights publicly so anyone can use, fine-tune, and inspect them. Closed-source models (like GPT-4, Claude, and Gemini) are only accessible via APIs — their weights and training details are proprietary.
The most active areas shift regularly. Visit the Topics page to see real-time trend data — sparkline charts show paper volume over time, and acceleration badges highlight which topics are gaining momentum this week.
Most AI papers on arXiv include a BibTeX citation on their abstract page. Click through to any paper on the Papers page to find the arXiv link, then use the “Export BibTeX” button on arXiv. For conference papers, use the citation format specified by the venue (e.g., NeurIPS, ICML).
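For reference, an arXiv BibTeX export typically follows a template like the one below. Every field here is a placeholder, not a real paper — substitute the values from the paper's arXiv page:

```bibtex
@misc{lastname2024title,
  title         = {Paper Title Goes Here},
  author        = {Lastname, Firstname and Coauthor, Secondname},
  year          = {2024},
  eprint        = {2401.00000},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG}
}
```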
Still have questions? Reach out on X.
Looking for AI terminology? Check out the glossary.