Learning efficient haptic shape exploration with a rigid tactile sensor array
- PMID: 31896135
- PMCID: PMC6940144
- DOI: 10.1371/journal.pone.0226880
Erratum in
- Correction: Learning efficient haptic shape exploration with a rigid tactile sensor array. PLoS One. 2020 Feb 27;15(2):e0230054. doi: 10.1371/journal.pone.0230054. PMID: 32109261.
Abstract
Haptic exploration is a key skill for both robots and humans to discriminate and handle unknown objects or to recognize familiar objects. Its active nature is evident in humans, who from early on reliably acquire sophisticated sensory-motor capabilities for active exploratory touch and directed manual exploration, associating surfaces and object properties with their spatial locations. This is in stark contrast to robotics, where the relative lack of good real-world interaction models, along with very restricted sensors and a scarcity of suitable training data for machine learning methods, has so far left haptic exploration a largely underdeveloped skill. In robot vision, however, deep learning approaches and an abundance of available training data have triggered huge advances. In the present work, we connect recent advances in recurrent models of visual attention with previous insights into the organisation of human haptic search behavior, exploratory procedures, and haptic glances, yielding a novel architecture that learns a generative model of haptic exploration in a simulated three-dimensional environment. This environment contains a set of rigid static objects representing a selection of one-dimensional local shape features embedded in 3D space: an edge, a flat surface, and a convex surface. The proposed algorithm simultaneously optimizes the main components of the perception-action loop: feature extraction, integration of features over time, and the control strategy, while continuously acquiring data online. Inspired by the Recurrent Attention Model, we formalize the target task of haptic object identification in a reinforcement learning framework and reward the learner only in the case of success. We train a multi-module neural network, including a feature extractor and a recurrent neural network module for pose control that stores and combines sequential sensory data. The resulting haptic meta-controller, called the Haptic Attention Model, moves the rigid 16 × 16 tactile sensor array through a physics-driven simulation environment, performing a sequence of haptic glances and outputting the corresponding force measurements. The method has been successfully tested with four different objects, achieving classification results close to 100% while performing an object contour exploration optimized for its own sensor morphology.
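As a reading aid for the architecture outlined above, the following is a minimal PyTorch sketch of a RAM-style glance loop with a success-only reward. It is not the authors' released code: the names (HapticGlanceNet, simulate_glance), layer sizes, pose parameterization, glance budget, and the auxiliary cross-entropy term are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of a
# Haptic Attention Model-style loop: encode one 16x16 pressure image
# plus the glance pose, integrate glances with an LSTM, and emit the
# next glance pose and class scores over the candidate objects.
import torch
import torch.nn as nn

class HapticGlanceNet(nn.Module):
    def __init__(self, n_classes=4, pose_dim=3, hidden=256):
        super().__init__()
        self.pressure_enc = nn.Sequential(            # feature extractor
            nn.Flatten(), nn.Linear(16 * 16, 128), nn.ReLU())
        self.pose_enc = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU())
        self.rnn = nn.LSTMCell(256, hidden)           # integrates glances over time
        self.pose_head = nn.Linear(hidden, pose_dim)  # where to touch next
        self.class_head = nn.Linear(hidden, n_classes)  # what the object is

    def forward(self, pressure, pose, state):
        g = torch.cat([self.pressure_enc(pressure),
                       self.pose_enc(pose)], dim=-1)
        h, c = self.rnn(g, state)
        return torch.tanh(self.pose_head(h)), self.class_head(h), (h, c)

def episode(net, simulate_glance, label, n_glances=4, sigma=0.1):
    """One REINFORCE episode: reward is 1 only if the final guess is
    correct. `simulate_glance(pose) -> (16, 16) pressure map` stands in
    for the physics-driven simulation and is a hypothetical interface."""
    pose = torch.zeros(1, 3)                          # start glance pose
    state = (torch.zeros(1, 256), torch.zeros(1, 256))
    log_probs = []
    for _ in range(n_glances):
        pressure = simulate_glance(pose).view(1, 16, 16)
        mean_pose, logits, state = net(pressure, pose, state)
        dist = torch.distributions.Normal(mean_pose, sigma)
        pose = dist.sample()                          # stochastic glance policy
        log_probs.append(dist.log_prob(pose).sum())
    # Success-only reward enters through the REINFORCE (score-function) term.
    reward = (logits.argmax(dim=-1) == label).float()
    ce = nn.functional.cross_entropy(logits, label)   # assumed auxiliary loss
    reinforce = -(reward.detach() * torch.stack(log_probs).sum())
    return (ce + reinforce).mean()
```

In this sketch the classifier head is trained with an auxiliary cross-entropy term while the pose policy receives only the binary success signal via REINFORCE, in the spirit of the Recurrent Attention Model; whether the paper combines the losses this way is not stated in the abstract.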
Conflict of interest statement
The authors have declared that no competing interests exist.