Training Datasets Generation for Machine Learning: Application to Vision Based Navigation
- Authors
  - Jérémy Lebreton
  - Ingo Ahrns
  - Roland Brochard
  - Christoph Haskamp
  - Hans Krüger
  - Matthieu Le Goff
  - Nicolas Menga
  - Nicolas Ollagnier
  - Ralf Regele
  - Francesco Capolupo
  - Massimo Casasco
- Affiliation
  - Airbus Defence and Space, Toulouse
Vision Based Navigation consists in using cameras as precision sensors for GNC by extracting navigation information from images. One of the obstacles to the adoption of machine learning for space applications is demonstrating that the available training datasets are adequate to validate the algorithms. The objective of this study is to generate datasets of images and metadata suitable for training machine learning algorithms. Two use cases were selected, and a robust methodology was developed to validate the datasets, including the ground truth. The first use case is an in-orbit rendezvous with a man-made object: a mockup of the ENVISAT satellite. The second use case is a Lunar landing scenario. Datasets were produced from archival data (Chang'e 3), from laboratory acquisitions at the DLR TRON facility and the Airbus Robotic Laboratory, from the SurRender high-fidelity image simulator using Model Capture, and from Generative Adversarial Networks. The use case definition included the selection of benchmark algorithms: an AI-based pose estimation algorithm and a dense optical flow algorithm. Finally, it is demonstrated that the datasets produced with SurRender and the selected laboratory facilities are adequate for training machine learning algorithms.
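The abstract names a dense optical flow algorithm as one of the benchmarks but does not detail its implementation. The snippet below is a minimal illustrative sketch only, assuming OpenCV's Farnebäck method and hypothetical file names for two consecutive rendered frames; it is not the study's actual benchmark.

```python
# Illustrative sketch: dense optical flow between two rendered frames.
# The Farnebäck method and the file names are assumptions, not the study's
# actual benchmark implementation.
import cv2
import numpy as np

# Hypothetical consecutive frames from a simulated Lunar descent sequence.
prev_img = cv2.imread("descent_frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("descent_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Farnebäck dense optical flow: returns an (H, W, 2) array of per-pixel
# displacements (dx, dy) from the first frame to the second.
flow = cv2.calcOpticalFlowFarneback(
    prev_img, next_img, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

# The estimated flow field can be compared against a ground-truth motion
# field rendered alongside the images to assess dataset adequacy.
magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean flow magnitude [px]:", float(np.mean(magnitude)))
```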