The role of size and binocular information in guiding reaching: Insights from virtual reality and visual form agnosia III (of III)

J. P. Wann, Mark Arwyn Mon-Williams, R. D. McIntosh, M. Smyth, A. D. Milner

    Research output: Contribution to journal › Article

    12 Citations (Scopus)

    Abstract

    Reaching out to grasp an object requires information about the size of the object and the distance between the object and the body. We used a virtual reality system with a control population and a patient with visual form agnosia (DF) to explore the use of binocular information and size cues in prehension. The experiments consisted of a perceptual matching task in addition to a prehension task. In the prehension task, control participants modified their reach distance in response to step changes in vergence in the absence of any clear reference for relative disparity. Their reach distance was unaffected by equivalent step changes in size, even though they used this information to modify grasp and showed a size bias in a distance matching task. Notably, DF showed the same pattern of results as the controls but was far more sensitive to step changes in vergence. This finding complements previous research suggesting that DF relies predominantly on vergence information when gauging target distance. The results from the perceptual matching tasks confirmed previous findings suggesting that DF is unable to make use of size information for perceptual matching, including distance comparisons. These data are discussed with regard to the properties of the pathways subserving the two visual cortical processing streams.

    Original language: English
    Pages (from-to): 143-150
    Number of pages: 7
    Journal: Experimental Brain Research
    Volume: 139
    DOIs
    Publication status: Published - 2001

    Keywords

    • prehension
    • binocular
    • vergence
    • distance perception
    • visual form agnosia
    • human
    • perception
    • time
    • collision
    • distance
    • model
