Multi-view 3D reconstruction remains a core challenge in computer vision. Recent methods, such as DUSt3R and its successors, directly regress pointmaps from image pairs without relying on known scene geometry or camera parameters. However, the performance of these models is constrained by the diversity and scale of available training data. In this work, we introduce Puzzles, a data augmentation strategy that synthesizes an unbounded volume of high-quality, posed video-depth data from just a single image or video clip. By simulating diverse camera trajectories and realistic scene geometry through targeted image transformations, Puzzles significantly enhances data variety. Extensive experiments show that integrating Puzzles into existing video-based 3D reconstruction pipelines consistently boosts performance, all without modifying the underlying network architecture. Notably, models trained on only 10% of the original data, augmented with Puzzles, still achieve accuracy comparable to those trained on the full dataset.
Image-to-Clips. (A) Starting from a single RGB-D image, we (B) partition it into ordered, overlapping patches, (C) simulate diverse viewpoints by calibrating virtual camera poses, and (D) generate augmented, posed images with aligned depth maps for use in 3D reconstruction.
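To make steps (B)–(D) concrete, below is a minimal sketch of the Image-to-Clips idea. The function names, crop schedule, and pose-jitter scheme are our own assumptions for illustration, not the released implementation: from one RGB-D frame we take an ordered chain of overlapping crops, back-project each crop's depth, and assign it a virtual camera that looks at the crop's 3D centroid.

```python
# Hypothetical sketch of Image-to-Clips (names and exact geometry are assumptions,
# not the authors' code). Poses are expressed relative to the original camera frame.
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-space points (H, W, 3) with pinhole intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)

def look_at(eye, target, up=np.array([0.0, -1.0, 0.0])):
    """Build a world-to-camera rotation/translation that looks from `eye` toward `target`."""
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(up, z); x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z], axis=0)   # rows are the camera axes
    t = -R @ eye
    return R, t

def image_to_clip(rgb, depth, K, num_views=4, crop=256, stride=96, jitter=0.15):
    """Sample an ordered chain of overlapping crops and a virtual 6-DoF pose per crop."""
    pts = backproject(depth, K)
    clip = []
    for i in range(num_views):
        u0, v0 = i * stride, i * stride                    # ordered, overlapping windows
        patch_rgb = rgb[v0:v0 + crop, u0:u0 + crop]
        patch_pts = pts[v0:v0 + crop, u0:u0 + crop].reshape(-1, 3)
        center = patch_pts.mean(axis=0)                    # 3D centroid of the patch
        # place the virtual camera slightly off the original optical center
        eye = np.random.uniform(-jitter, jitter, 3) * np.linalg.norm(center)
        R, t = look_at(eye, center)
        clip.append({"rgb": patch_rgb,
                     "depth": depth[v0:v0 + crop, u0:u0 + crop],
                     "pose_w2c": (R, t)})
    return clip
```

Because every crop inherits pixel-aligned depth from the source frame, the synthesized views come with exact ground-truth geometry and poses by construction.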
Clips-to-Clips. (A) We begin by uniformly sampling frames from a video. (B) A pair-wise overlap matrix is computed to measure frame redundancy, with overlap visualized in purple and overlap ratios annotated in red. (C) Low-redundancy keyframes are then selected, and diverse sub-clips are synthesized from them using the Image-to-Clips method.
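The sketch below illustrates the keyframe-selection step under stated assumptions: the overlap estimate (projecting one frame's points into another frame's image) and the redundancy threshold are ours, chosen only to show the greedy filtering logic described in (B)–(C).

```python
# Hypothetical sketch of Clips-to-Clips keyframe selection (overlap measure and
# threshold are assumptions). Each frame dict is assumed to carry back-projected
# world points and a world-to-camera pose.
import numpy as np

def view_overlap(pts_i_world, pose_j_w2c, K, hw):
    """Fraction of frame i's 3D points that project inside frame j's image."""
    R, t = pose_j_w2c
    cam = pts_i_world @ R.T + t                         # world -> camera j
    cam = cam[cam[:, 2] > 0]                            # keep points in front of camera j
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    H, W = hw
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    return inside.sum() / max(len(pts_i_world), 1)

def select_keyframes(frames, K, hw, max_overlap=0.6):
    """Greedy pass: keep a frame only if its overlap with every kept keyframe is low."""
    keyframes = [0]
    for i in range(1, len(frames)):
        overlaps = [view_overlap(frames[i]["pts_world"],
                                 frames[k]["pose_w2c"], K, hw)
                    for k in keyframes]
        if max(overlaps) < max_overlap:                 # low redundancy -> keep as keyframe
            keyframes.append(i)
    return keyframes
```

Each selected keyframe is then expanded into new sub-clips with the Image-to-Clips procedure above, so redundant frames are replaced by geometrically diverse synthetic views.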
Example of Image-to-Clips. From one image (left), the method samples overlapping patches and assigns 6-DoF camera paths to craft view-consistent clips (right), turning a single frame into diverse, realistic training sequences across human, indoor, and outdoor scenes.
Example of Clips-to-Clips. Top: Consecutive frames from the original training clip. Middle: Selected keyframes with corresponding patch selections. Bottom: Synthetic video clips generated from the keyframes using the Image-to-Clips method.
Demonstration of how the Image-to-Clips method samples overlapping patches (colored regions) from a single image and generates diverse video clips, turning one frame into varied training sequences.
Side-by-side comparison on real-world captured video: standard 3D reconstruction (left) vs. the Puzzles-augmented result (right), showing richer structural detail and more complete geometry.
Acc/Comp denote accuracy and completion errors (lower is better); Δ in parentheses is the relative change versus the same model trained on the full dataset without Puzzles.

| Method | w/ Puzzles | Data | 7Scenes Acc ↓ | 7Scenes Comp ↓ | NRGBD Acc ↓ | NRGBD Comp ↓ | DTU Acc ↓ | DTU Comp ↓ |
|---|---|---|---|---|---|---|---|---|
| Spann3R | - | full | 0.0388 | 0.0253 | 0.0686 | 0.0315 | 6.2432 | 3.1259 |
| Spann3R | ✓ | 1/10 | 0.0389 (-0.26%) | 0.0248 (+1.98%) | 0.0753 (-9.79%) | 0.0341 (-8.50%) | 4.9832 (+20.18%) | 2.5172 (+19.47%) |
| Spann3R | ✓ | full | 0.0330 (+14.94%) | 0.0224 (+11.46%) | 0.0644 (+6.00%) | 0.0291 (+7.51%) | 5.0004 (+19.90%) | 2.5113 (+19.66%) |
| Fast3R | - | full | 0.0412 | 0.0275 | 0.0735 | 0.0287 | 4.2961 | 2.0681 |
| Fast3R | ✓ | 1/10 | 0.0402 (+2.30%) | 0.0272 (+1.09%) | 0.0772 (-5.11%) | 0.0295 (-2.78%) | 3.7174 (+13.47%) | 1.8941 (+8.41%) |
| Fast3R | ✓ | full | 0.0342 (+16.99%) | 0.0239 (+13.09%) | 0.0684 (+6.94%) | 0.0259 (+9.75%) | 3.5912 (+16.41%) | 1.7379 (+15.96%) |
| SLAM3R | - | full | 0.0291 | 0.0245 | 0.0481 | 0.0292 | 4.3820 | 2.4754 |
| SLAM3R | ✓ | 1/10 | 0.0289 (+0.68%) | 0.0237 (+3.26%) | 0.0493 (-2.49%) | 0.0313 (-7.19%) | 3.5980 (+17.89%) | 2.0891 (+15.60%) |
| SLAM3R | ✓ | full | 0.0264 (+9.27%) | 0.0218 (+11.02%) | 0.0439 (+8.73%) | 0.0263 (+9.93%) | 3.6497 (+16.71%) | 2.0762 (+16.12%) |
To evaluate the impact of Puzzles augmentation, we retrain three video-based 3R-series baselines—Spann3R, SLAM3R, and Fast3R—on a unified dataset (which differs from their original training data), using each model's published architecture and training protocol, both with and without Puzzles. The reported metrics may therefore differ from those in the original papers; our goal is to measure how each model responds to a new data distribution and to the augmentation.
Coming soon...