Invariance Co-Training for Robot Visual Generalization

Invariance Co-Training

We train a 2D vision encoder that generalizes to new camera viewpoints, lighting conditions, and background clutter by leveraging diverse synthetic images, large-scale open-source datasets, and videos of static scenes.

Abstract

Despite recent advancements, many large-scale robotic policies remain sensitive to key sources of observational variation, such as changes in camera perspective, lighting, and the presence of distractor objects. We posit that this limited generalizability arises from the substantial diversity required to robustly cover these quasistatic axes, coupled with the current scarcity of large-scale robotic datasets that vary richly across them.

In this work, we systematically examine what robots need in order to generalize across these challenging axes by introducing two key auxiliary tasks, state similarity and invariance to observational perturbations, applied to both demonstration data and static visual data. We then show that, through these auxiliary tasks, combining more-expensive robotic demonstration data with less-expensive, visually rich synthetic images generated from non-physics-based simulation (e.g., Unreal Engine) leads to substantial gains in generalization to unseen camera viewpoints, lighting configurations, and distractor conditions.
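As a concrete illustration, the sketch below shows one way such a co-training objective could be wired together: a behavior-cloning loss on robot demonstrations plus a weighted auxiliary invariance loss on paired static or synthetic renderings of the same scene. All names here (policy, demo_batch, static_batch, invariance_loss, aux_weight) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical co-training step: behavior cloning on demonstrations plus an
# auxiliary invariance loss on static/synthetic image pairs (a sketch, not
# the paper's code).
import torch
import torch.nn.functional as F


def co_training_step(policy, demo_batch, static_batch, invariance_loss,
                     optimizer, aux_weight=0.5):
    obs, goals, actions = demo_batch   # robot demonstration data
    view_a, view_b = static_batch      # same static scene, two perturbed renderings

    # Behavior-cloning loss on demonstration data (assumes a continuous-action policy).
    bc_loss = F.mse_loss(policy(obs, goals), actions)

    # Auxiliary invariance loss on cheap static/synthetic data,
    # applied to the policy's shared vision encoder.
    aux_loss = invariance_loss(policy.encoder, view_a, view_b)

    loss = bc_loss + aux_weight * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```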

Our results demonstrate that co-training on this diverse data improves performance by 18% over existing generative augmentation methods.

Our approach demonstrates robust generalization across diverse visual conditions by leveraging synthetic imagery, open-source datasets, and static scene videos for enhanced robot vision training.

Invariance Co-Training as Contrastive Learning

We propose a contrastive co-training framework where the policy learns representations invariant to observational perturbations (e.g., viewpoint, lighting) while remaining sensitive to task-relevant state and goal changes. By distinguishing between semantically similar and dissimilar (state, goal) pairs across varying observational conditions, the model learns to align representations that matter for control.
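A minimal sketch of such an objective is shown below, assuming a PyTorch image encoder and batches in which obs_b[i] is a perturbed rendering (new viewpoint, lighting, or clutter) of the same underlying state as obs_a[i]. This is a standard symmetric InfoNCE formulation, not necessarily the exact loss used in our system.

```python
# Sketch of an invariance objective as a symmetric InfoNCE contrastive loss.
# Shapes and names are illustrative assumptions.
import torch
import torch.nn.functional as F


def invariance_contrastive_loss(encoder, obs_a, obs_b, temperature=0.1):
    """Pull together embeddings of the same (state, goal) rendered under
    different observational perturbations, and push apart embeddings of
    different states within the batch.

    obs_a, obs_b: (B, C, H, W) image batches; obs_b[i] is a perturbed
    rendering of the same underlying state as obs_a[i].
    """
    z_a = F.normalize(encoder(obs_a), dim=-1)  # (B, D)
    z_b = F.normalize(encoder(obs_b), dim=-1)  # (B, D)

    logits = z_a @ z_b.t() / temperature       # (B, B) similarity matrix
    labels = torch.arange(z_a.size(0), device=z_a.device)

    # Symmetric InfoNCE: each view should match its perturbed counterpart.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In-batch samples from other states act as negatives, so the encoder is encouraged to be invariant to the perturbation while remaining discriminative across task-relevant states.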


Example Static Simulation Images

Example static simulation images from the SimplerEnv, Unreal Engine, and LIBERO simulators.


Generalization to Novel Observation Variations

Experimentally, our invariance co-training approach significantly outperforms the Behavior Cloning baseline, improving average success rates by approximately 40% across key variations. It also yields 18% higher success rates than variants that rely only on simulation or generative models.

Main Results: performance comparison across methods and observational variations.

Quantitative Results: Our Invariance Co-Training approach consistently outperforms baseline methods across camera viewpoint changes, lighting variations, and background clutter scenarios.

Failure cases under an auxiliary camera viewpoint, background clutter, and a lighting variation: baseline Behavior Cloning struggles with these observational changes.

Success cases under the same auxiliary camera viewpoint, background clutter, and lighting variation: our Invariance Co-Training approach handles these challenging conditions.

Handheld Camera Generalization

Our method generalizes zero-shot to new camera perspectives on the DROID platform.