Puzzles: Unbounded Video-Depth Augmentation for Scalable End-to-End 3D Reconstruction

1Australian National University, 2Griffith University, 3CSIRO's Data61

Puzzles augments 3D reconstruction training by generating diverse novel viewpoints and realistic camera trajectories from single images or video clips, eliminating redundant frames.

Abstract

Multi-view 3D reconstruction remains a core challenge in computer vision. Recent methods, such as DUSt3R and its successors, directly regress pointmaps from image pairs without relying on known scene geometry or camera parameters. However, the performance of these models is constrained by the diversity and scale of available training data. In this work, we introduce Puzzles, a data augmentation strategy that synthesizes an unbounded volume of high-quality, posed video-depth data from just a single image or video clip. By simulating diverse camera trajectories and realistic scene geometry through targeted image transformations, Puzzles significantly enhances data variety. Extensive experiments show that integrating Puzzles into existing video-based 3D reconstruction pipelines consistently boosts performance, all without modifying the underlying network architecture. Notably, models trained on only 10% of the original data, augmented with Puzzles, still achieve accuracy comparable to those trained on the full dataset.

Method

Image-to-Clips. (A) Starting from a single RGB-D image, we (B) partition it into ordered, overlapping patches, (C) simulate diverse viewpoints by calibrating virtual camera poses, and (D) generate augmented, posed images with aligned depth maps for use in 3D reconstruction.
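To make the pipeline concrete, below is a minimal NumPy sketch of the patch-sampling step of Image-to-Clips (our illustration, not the released Puzzles code). The function name, the horizontal sweep, and the omission of virtual-pose perturbation and depth-based reprojection are simplifying assumptions.

    import numpy as np

    def image_to_clips(rgb, depth, K, num_patches=8, patch=384):
        # Slide an ordered, overlapping window across an RGB-D frame and emit
        # one "virtual view" per window.  Cropping a pinhole image only shifts
        # the principal point, so each crop inherits the original camera pose;
        # the real method additionally samples a 6-DoF virtual pose per patch
        # and reprojects the content with the depth map (omitted here).
        H, W = depth.shape
        xs = np.linspace(0, W - patch, num_patches).astype(int)  # ordered, overlapping crops
        y0 = (H - patch) // 2
        clip = []
        for x0 in xs:
            K_i = K.copy()
            K_i[0, 2] -= x0                 # shift principal point into crop coordinates
            K_i[1, 2] -= y0
            clip.append({
                "rgb":   rgb[y0:y0 + patch, x0:x0 + patch],
                "depth": depth[y0:y0 + patch, x0:x0 + patch],
                "K":     K_i,
                "pose":  np.eye(4),         # placeholder; Puzzles assigns a virtual 6-DoF pose
            })
        return clip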

Clips-to-Clips. (A) We begin by uniformly sampling frames from a video. (B) A pair-wise overlap matrix is computed to measure frame redundancy, with overlap visualized in purple and overlap ratios annotated in red. (C) Low-redundancy keyframes are then selected, and diverse sub-clips are synthesized from them using the Image-to-Clips method.
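As a rough sketch of the redundancy measure and keyframe selection described above (our own simplification, assuming shared intrinsics, known depth and poses, and ignoring occlusion), the overlap between two frames can be estimated by unprojecting one frame's depth and counting how many points reproject inside the other frame; keyframes are then kept greedily whenever their overlap with the previous keyframe falls below a threshold.

    import numpy as np

    def pairwise_overlap(depths, poses_c2w, K):
        # overlap[i, j]: fraction of frame i's valid pixels that land inside
        # frame j after unprojection with depth and reprojection with the
        # relative pose (shared intrinsics K, occlusion ignored).
        n = len(depths)
        M = np.eye(n)
        for i in range(n):
            H, W = depths[i].shape
            v, u = np.mgrid[0:H, 0:W]
            z = depths[i].ravel()
            m = z > 0
            pix = np.stack([u.ravel()[m] * z[m], v.ravel()[m] * z[m], z[m]], 0)
            pts_w = poses_c2w[i][:3, :3] @ (np.linalg.inv(K) @ pix) + poses_c2w[i][:3, 3:]
            for j in range(n):
                if i == j:
                    continue
                w2c = np.linalg.inv(poses_c2w[j])
                pts_j = w2c[:3, :3] @ pts_w + w2c[:3, 3:]
                uvw = K @ pts_j
                valid = uvw[2] > 1e-6
                uu = uvw[0] / np.maximum(uvw[2], 1e-6)
                vv = uvw[1] / np.maximum(uvw[2], 1e-6)
                Hj, Wj = depths[j].shape
                M[i, j] = (valid & (uu >= 0) & (uu < Wj) & (vv >= 0) & (vv < Hj)).mean()
        return M

    def select_keyframes(M, max_overlap=0.7):
        # Greedily keep a frame only if its overlap with the most recently
        # kept keyframe is below a threshold (the 0.7 value is an assumption).
        keep = [0]
        for i in range(1, M.shape[0]):
            if M[keep[-1], i] < max_overlap:
                keep.append(i)
        return keep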

Example

Example of Image-to-Clips. From one image (left), the method samples overlapping patches and assigns 6-DoF camera paths to craft view-consistent clips (right), turning a single frame into diverse, realistic training sequences across human, indoor, and outdoor scenes.
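For illustration, a smooth 6-DoF camera path between two virtual poses can be generated by interpolating rotation and translation separately; the SciPy-based sketch below is an assumption about the path model, not the exact trajectory sampler used by Puzzles.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def interpolate_path(T_start, T_end, n_frames=8):
        # Smooth path between two 4x4 cam-to-world poses: spherical linear
        # interpolation (slerp) for rotation, linear interpolation for translation.
        t = np.linspace(0.0, 1.0, n_frames)
        rots = Rotation.from_matrix(np.stack([T_start[:3, :3], T_end[:3, :3]]))
        Rs = Slerp([0.0, 1.0], rots)(t).as_matrix()
        ts = (1 - t)[:, None] * T_start[:3, 3] + t[:, None] * T_end[:3, 3]
        path = np.tile(np.eye(4), (n_frames, 1, 1))
        path[:, :3, :3] = Rs
        path[:, :3, 3] = ts
        return path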

Example of Clips-to-Clips. Top: consecutive frames from the original training clip. Middle: selected keyframes with their corresponding patch selections. Bottom: synthetic video clips generated from the keyframes using the Image-to-Clips method.

Demo

Demonstration of how the Image-to-Clips method samples overlapping patches (colored regions) from a single image and generates diverse, view-consistent video clips for training.

Qualitative Evaluation

Side-by-side comparison on real-world captured video: standard 3D reconstruction (left) vs. the Puzzles-augmented result (right), showing richer structural detail and more complete geometry. Click the arrow button for additional comparisons.

Quantitative Evaluation

Table 1. Quantitative comparison on 7Scenes, NRGBD, and DTU. Each cell reports the metric value and, for Puzzles-trained models, the relative improvement (Δ, %) over the corresponding baseline. Lower Acc and Comp are better.

Method   Puzzles  Data  | 7Scenes Acc↓       Comp↓           | NRGBD Acc↓         Comp↓           | DTU Acc↓           Comp↓
Spann3R  no       full  | 0.0388             0.0253          | 0.0686             0.0315          | 6.2432             3.1259
Spann3R  yes      1/10  | 0.0389 (-0.26)     0.0248 (+1.98)  | 0.0753 (-9.79)     0.0341 (-8.50)  | 4.9832 (+20.18)    2.5172 (+19.47)
Spann3R  yes      full  | 0.0330 (+14.94)    0.0224 (+11.46) | 0.0644 (+6.00)     0.0291 (+7.51)  | 5.0004 (+19.90)    2.5113 (+19.66)
Fast3R   no       full  | 0.0412             0.0275          | 0.0735             0.0287          | 4.2961             2.0681
Fast3R   yes      1/10  | 0.0402 (+2.30)     0.0272 (+1.09)  | 0.0772 (-5.11)     0.0295 (-2.78)  | 3.7174 (+13.47)    1.8941 (+8.41)
Fast3R   yes      full  | 0.0342 (+16.99)    0.0239 (+13.09) | 0.0684 (+6.94)     0.0259 (+9.75)  | 3.5912 (+16.41)    1.7379 (+15.96)
SLAM3R   no       full  | 0.0291             0.0245          | 0.0481             0.0292          | 4.3820             2.4754
SLAM3R   yes      1/10  | 0.0289 (+0.68)     0.0237 (+3.26)  | 0.0493 (-2.49)     0.0313 (-7.19)  | 3.5980 (+17.89)    2.0891 (+15.60)
SLAM3R   yes      full  | 0.0264 (+9.27)     0.0218 (+11.02) | 0.0439 (+8.73)     0.0263 (+9.93)  | 3.6497 (+16.71)    2.0762 (+16.12)

To evaluate the impact of Puzzles augmentation, we retrain three video-based 3R-series baselines (Spann3R, SLAM3R, and Fast3R) on a unified dataset that differs from their original training data, using each model's published architecture and training protocol, both with and without Puzzles. The reported metrics may therefore diverge from those in the original papers; our goal is to measure how each model responds to a new data distribution and to the augmentation.
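The accuracy (Acc) and completeness (Comp) columns in Table 1 follow the standard point-cloud reconstruction metrics; a minimal sketch of the common mean nearest-neighbour-distance convention is given below (whether each evaluation uses the mean or median, and how the clouds are aligned, is not restated here).

    import numpy as np
    from scipy.spatial import cKDTree

    def accuracy_completeness(pred_pts, gt_pts):
        # Acc:  mean distance from each predicted point to its nearest ground-truth point.
        # Comp: mean distance from each ground-truth point to its nearest predicted point.
        # Lower is better for both, matching the ↓ arrows in Table 1.
        acc = cKDTree(gt_pts).query(pred_pts)[0].mean()
        comp = cKDTree(pred_pts).query(gt_pts)[0].mean()
        return acc, comp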

BibTeX

Coming soon...