New digest every morning

Stop drowning in
ArXiv papers.

We read every trending paper on HuggingFace, pull structured reports from AlphaXiv, and synthesize one beautiful daily digest — delivered free to your inbox.

📄 15 papers/day ⏱ 5 min read 🔓 100% free
365 digests published
5,000+ papers summarized
< 5 min daily read time
Free forever
How it works

From 30+ papers to one perfect digest

Every day at 8 AM UTC, our pipeline runs automatically. You just open your email.

🤗

Fetch trending papers

We pull the top 15 daily papers from HuggingFace's trending feed — the ones the community is actually excited about.
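Since the trending feed is public, step one is small enough to sketch. The endpoint URL and JSON field names below are assumptions about HuggingFace's public daily-papers API, not PaperPulse's actual code:

```python
# Sketch: fetch HuggingFace's daily papers and keep the top n by upvotes.
# The endpoint and field names ("paper", "upvotes", "title") are assumptions.
import json
import urllib.request

HF_DAILY = "https://huggingface.co/api/daily_papers"  # assumed public endpoint

def fetch_trending(n=15):
    """Download today's trending list and keep the top n by upvotes."""
    with urllib.request.urlopen(HF_DAILY, timeout=10) as resp:
        entries = json.load(resp)
    return rank_by_upvotes(entries, n)

def rank_by_upvotes(entries, n):
    # Each feed entry nests its metadata under "paper"; missing upvotes count as 0.
    papers = [e.get("paper", {}) for e in entries]
    papers.sort(key=lambda p: p.get("upvotes", 0), reverse=True)
    return [(p.get("title"), p.get("upvotes", 0)) for p in papers[:n]]
```

The ranking is kept separate from the network call so it can be tested on canned JSON.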

📚

Get structured reports

Each paper is enriched via AlphaXiv's structured markdown reports — authors, methodology, key findings, impact. If unavailable, Gemini generates one from the abstract.
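The enrichment step is essentially a lookup with a fallback: try AlphaXiv first, otherwise generate from the abstract. A minimal sketch, where both fetcher callables are hypothetical stand-ins for the real AlphaXiv and Gemini clients:

```python
def enrich(paper, fetch_alphaxiv, generate_from_abstract):
    """Return a structured markdown report for one paper.

    fetch_alphaxiv and generate_from_abstract are injected callables
    (hypothetical stand-ins, not real client calls). fetch_alphaxiv
    returns a markdown report, or None when AlphaXiv has no report.
    """
    report = fetch_alphaxiv(paper["arxiv_id"])
    if report is None:
        # Fallback: draft a report from the abstract alone.
        report = generate_from_abstract(paper["abstract"])
    return report
```

Injecting the two clients keeps the fallback logic trivially testable without touching either service.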

🧠

Synthesize the digest

A Gemini model reads all the reports and writes one cohesive blog post — identifying top papers, connecting themes, and highlighting what matters.

✉️

Deliver to your inbox

A beautifully formatted email with collapsible paper details and direct links to arXiv, GitHub, and project pages. One click to go deeper.
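The four steps above compose into one daily job. A sketch of the orchestration, with every stage injected as a callable (hypothetical stand-ins; the real pipeline lives in the open-source repo):

```python
def run_daily_digest(fetch, enrich, synthesize, deliver, n=15):
    """One end-to-end run: fetch -> enrich -> synthesize -> deliver.

    All four stages are injected callables (hypothetical stand-ins),
    so the pipeline itself is a thin, testable composition.
    """
    papers = fetch(n)                      # step 1: trending papers
    reports = [enrich(p) for p in papers]  # step 2: structured reports
    digest = synthesize(reports)           # step 3: one cohesive blog post
    deliver(digest)                        # step 4: email it out
    return digest
```

A scheduler (e.g. a daily cron firing at 08:00 UTC) only has to call this one function.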

Preview

See what you'll get

Here's what a real PaperPulse digest looks like — from today's trending papers.

mail.google.com — PaperPulse

⚡ The "Streaming" Revolution in Multimodal AI

March 13, 2026 · 15 papers · 5 min read

Spatial-TTT: Streaming Visual-based Spatial Intelligence with Test-Time Training

▲ 53 Tencent Hunyuan

A framework that uses Test-Time Training to enable persistent 3D spatial memory over long video streams — achieving linear scaling where others hit memory limits.

IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse

▲ 27 Z.ai

Removes 75% of indexer computations in DeepSeek Sparse Attention by reusing top-k selections across layers — 1.82× prefill speedup.

Video-Based Reward Modeling for Computer-Use Agents

▲ 24 LIME NLP

Predicts agent task success from execution videos alone — outperforms GPT-5.2 and Gemini-3 Pro across Ubuntu, macOS, Windows, and Android.

DreamVideo-Omni: Omni-Motion Controlled Multi-Subject Video Customization

▲ 19 TongyiLab

Unified framework enabling precise multi-subject identity and multi-granularity motion control via progressive two-stage training.

Features

More than just summaries

PaperPulse connects papers, surfaces trends, and gives you everything needed to decide what's worth reading deeper.

🔗

Cross-paper themes

Every digest identifies 2-3 emerging themes and explicitly connects related papers — the insight layer you can't get from reading papers in isolation.

📊

Structured reports

Each paper comes with a full breakdown: methodology, key findings with specific metrics, significance, and limitations — not just a rehashed abstract.

💻

Direct links to code

We surface GitHub repos, project pages, and datasets. One click from the digest to the actual implementation.

5-minute read

The editorial blog post covers the day's most important developments. Expand individual paper reports only when you want to go deeper.

🏢

Track the labs

See which organizations are publishing what. Spot trends from Meta, Google, Tencent, and university labs — before they become mainstream.

🌐

Fully open source

The entire pipeline — fetching, summarization, synthesis, delivery — is open source. Fork it, run it locally, contribute back.

Join the smartest inbox in AI

One email per day. No spam. Unsubscribe anytime.
Start with tomorrow's digest.