It has been claimed that computational models of argumentation support complex decision-making activities, in part because their semantics align closely with human intuition. In this paper we assess this claim by means of an experiment: people's evaluation of formal arguments — presented in plain English — is compared to the conclusions obtained from argumentation semantics. Our results show a correspondence between the acceptability of arguments to human subjects and the justification status prescribed by the formal theory in the majority of cases. However, post-hoc analyses reveal some significant deviations, which appear to arise from implicit knowledge about the domains in which evaluation took place. We argue that designers of argumentation systems must take such implicit domain-specific knowledge into account.
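For readers unfamiliar with the formal theory referred to above: in Dung-style abstract argumentation (the standard setting for such semantics), an argument's justification status is computed from the attack relation alone. The sketch below, in Python, computes the sceptically accepted arguments under the grounded semantics for a toy framework invented for illustration; the paper's actual frameworks, semantics, and presentation to subjects may differ.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework by iterating the characteristic function
    F(S) = {a | S defends a} to its least fixed point."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    def defended(s):
        # a is defended by s if every attacker of a is attacked by s
        return {a for a in arguments
                if all(any((b, c) in attacks for b in s)
                       for c in attackers(a))}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Toy framework: A attacks B, B attacks C.
# A is unattacked, so A is justified; A defends C; B is rejected.
args = {"A", "B", "C"}
atts = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```

The deviations the paper reports can be read against this machinery: the formal theory assigns status purely from the explicit attack graph, whereas human subjects appear to bring in implicit domain knowledge that the graph does not encode.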