In Programming by Demonstration (PbD), a key problem for autonomous learning is to automatically extract the relevant features of a manipulation task, since they strongly affect generalization capabilities. In this paper, task features are encoded as constraints of a learned planning model. To extract the relevant constraints, the human teacher demonstrates a set of tests, e.g., scenes with different objects, and the robot tries to execute the planning model on each test using constrained motion planning. Based on statistics about which constraints failed during planning, multiple hypotheses about a maximal subset of constraints that still admits a solution in all tests are refined in parallel by an evolutionary algorithm. The algorithm was evaluated in seven experiments on two robot systems.
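The core search described above (finding a maximal constraint subset that allows planning to succeed on every demonstrated test) can be illustrated with a toy sketch. Everything here is an assumption for illustration: constraints are plain integers, each test is modeled as a "conflict set" of constraints that cannot all be active at once (a stand-in for the constrained motion planner failing), and the evolutionary loop is a simple mutation-only scheme; the paper's actual planner, constraint representation, and failure statistics are not reproduced.

```python
import random

# Hypothetical setup: six candidate constraints; each demonstrated test
# is modeled as a conflict set that makes planning fail if fully active.
CONSTRAINTS = list(range(6))
TESTS = [{0, 3}, {1, 4}, {3, 5}]  # toy conflict sets, one per test

def feasible(subset, test):
    # Planning on this test "succeeds" unless the subset contains the
    # whole conflict set (toy stand-in for constrained motion planning).
    return not test.issubset(subset)

def fitness(subset):
    # Infeasible on any test -> score 0; otherwise prefer larger subsets,
    # since we want a MAXIMAL set of constraints that solves all tests.
    if all(feasible(subset, t) for t in TESTS):
        return len(subset)
    return 0

def evolve(pop_size=20, generations=50, seed=1):
    # Each individual is one hypothesis (a constraint subset); hypotheses
    # are refined in parallel across generations.
    rng = random.Random(seed)
    pop = [frozenset(c for c in CONSTRAINTS if rng.random() < 0.5)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the best half
        children = []
        for p in parents:
            child = set(p)
            child.symmetric_difference_update({rng.choice(CONSTRAINTS)})
            children.append(frozenset(child))  # mutate: toggle one constraint
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(best), fitness(best))
```

In this toy instance the full constraint set is infeasible (it contains every conflict set), while e.g. dropping constraints 3 and 4 leaves a feasible subset of size four, which is the optimum the search converges toward.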