Latent Representations for Visual Proprioception in Inexpensive Robots
- Authors: Sahara Sheikholeslami, Ladislau Bölöni
- Affiliation: University of Central Florida
Robotic manipulation requires explicit or implicit knowledge of the robot’s joint positions. Precise proprioception is standard in high-quality industrial robots but is often unavailable in inexpensive robots operating in unstructured environments. In this paper, we ask: to what extent can a fast, single-pass regression architecture perform visual proprioception from a single external camera image, available even in the simplest manipulation settings? We explore several latent representations, including CNNs, VAEs, ViTs, and bags of uncalibrated fiducial markers, using fine-tuning techniques adapted to the limited data available. We evaluate the achievable accuracy through experiments on an inexpensive 6-DoF robot.
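The abstract describes a single-pass regression from a latent image representation to the robot's joint positions. The sketch below illustrates how such a pipeline might be wired up; it is not the authors' implementation. The backbone choice (a ResNet-18 standing in for the CNN/VAE/ViT encoders compared in the paper), the head dimensions, and the fully frozen encoder are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a pretrained image encoder
# produces a latent vector, and a small MLP head regresses the 6 joint
# positions in a single forward pass. Backbone, head size, and the frozen
# encoder are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models


class VisualProprioception(nn.Module):
    def __init__(self, num_joints: int = 6):
        super().__init__()
        # Assumption: ResNet-18 stands in for the latent encoders
        # (CNN / VAE / ViT) explored in the paper.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier
        for p in self.encoder.parameters():
            p.requires_grad = False  # frozen here; the paper adapts fine-tuning to limited data
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, num_joints),  # regress joint positions directly
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(image)  # latent representation of the camera image
        return self.head(z)


if __name__ == "__main__":
    model = VisualProprioception()
    frame = torch.randn(1, 3, 224, 224)  # single external camera image
    joints = model(frame)
    print(joints.shape)  # torch.Size([1, 6])
```

In this sketch only the lightweight head is trained, which keeps the forward pass a single cheap regression over the latent vector; swapping in a VAE or ViT encoder changes only the `self.encoder` definition and the latent dimensionality.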