We address the problem of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) that link object-relative grasp poses to their probability of success. Grasp densities are learned and refined through exploration, by letting a robot “play” with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording the outcome of each. The underlying representation is nonparametric and relies on kernel density estimation to provide a continuous model.
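To make the representation concrete, here is a minimal sketch of building such a density from exploration outcomes. It is simplified in one important way: kernels are placed over 3D grasp positions only, whereas the method described above operates on full object-relative 6-DOF poses (an element of SE(3) needs a position kernel paired with an orientation kernel). The function name, bandwidth, and sample counts below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def grasp_density(successes: np.ndarray, bandwidth: float = 0.02):
    """Kernel density estimate built from successful grasp positions.

    successes : (n, 3) array of object-relative grasp positions whose
                execution succeeded during grasp-and-drop exploration.
    Returns a function mapping query positions (m, 3) to density values.
    """
    n, d = successes.shape
    # Normalizer for n isotropic Gaussian kernels in d dimensions.
    norm = 1.0 / (n * (np.sqrt(2.0 * np.pi) * bandwidth) ** d)

    def density(queries: np.ndarray) -> np.ndarray:
        # Pairwise squared distances between queries and kernel centers.
        diff = queries[:, None, :] - successes[None, :, :]
        sq = np.sum(diff ** 2, axis=-1)
        # Sum one Gaussian kernel per recorded successful grasp.
        return norm * np.exp(-0.5 * sq / bandwidth ** 2).sum(axis=1)

    return density

# Toy exploration loop: grasps near the origin (e.g., a handle) succeed.
rng = np.random.default_rng(0)
hypotheses = rng.uniform(-0.1, 0.1, size=(500, 3))     # visual grasp hypotheses
succeeded = np.linalg.norm(hypotheses, axis=1) < 0.05  # simulated outcomes
density = grasp_density(hypotheses[succeeded])
print(density(np.array([[0.0, 0.0, 0.0],               # high density
                        [0.09, 0.09, 0.0]])))          # low density
```

Because the estimate is a sum of kernels centered on observed successes, refining the model from further play amounts to adding new kernels, and the density remains continuous over the pose space rather than being tied to a fixed grid or parametric family.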