Gardening with a cognitive system

The GARNICS project aims at 3D sensing of plant growth and at building perceptual representations that link a robot gardener's perceptions to its actions. Plants are complex, self-changing systems whose complexity increases over time, and actions performed on them (such as watering) have strongly delayed effects. Monitoring and controlling plants is therefore a difficult perception-action problem that requires advanced predictive cognitive capabilities, which so far only experienced human gardeners can provide.

Sensing and controlling a plant's actual properties, i.e. its phenotype, is relevant to seed producers and plant breeders, among others. We address plant sensing and control by combining active vision with appropriate perceptual representations, which are essential for cognitive interaction. The core ingredients of these representations are channel representations, dynamic graphs and cause-effect couples (CECs). Channel representations are a wavelet-like, biologically motivated information representation that can be generalized coherently using group theory. Using these representations, plant models – represented by dynamic graphs – will be acquired, and by interacting with a human gardener the system will be taught the cause-effect relations that result from the possible treatments. Employing decision-making and planning processes based on CECs, our robot gardener will then be able to choose from its learned repertoire the actions appropriate for optimal plant growth. In this way we will arrive at an adaptive, interactive cognitive system, which will be implemented and tested in an industrially relevant plant-phenotyping application.
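To make the idea of a channel representation concrete, the sketch below encodes a scalar measurement (say, a leaf inclination) into a vector of responses from overlapping, smooth kernels, using the cos² kernel common in the channel-representation literature. The channel spacing, value range and function names here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def channel_encode(x, centers, width):
    """Encode a scalar x into cos^2 channel responses.

    Each channel is a cos^2 kernel centered at centers[n] with
    support of three channel spacings (a standard choice), so at
    any interior point exactly three channels are active.
    """
    d = (x - centers) / width          # signed distance in channel units
    k = np.cos(np.pi * d / 3.0) ** 2   # smooth, compactly supported kernel
    k[np.abs(d) >= 1.5] = 0.0          # zero outside the kernel support
    return k

# Eleven channels evenly spaced over the value range [0, 10].
centers = np.arange(0.0, 11.0, 1.0)
vec = channel_encode(4.3, centers, width=1.0)

# For interior values the cos^2 channels sum to a constant (1.5),
# and the strongest response sits at the nearest channel center.
```

A soft, local code like this degrades gracefully under noise and supports simple decoding (e.g. a local weighted average over the active channels), which is one reason such representations suit incremental perceptual learning.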