Consciousness and metarepresentation

A computational sketch

Axel Cleeremans, Bert Timmermans, Antoine Pasquali

Research output: Contribution to journal › Article

44 Citations (Scopus)

Abstract

When one is conscious of something, one is also conscious that one is conscious. Higher-Order Thought Theory [Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press] takes it that it is in virtue of the fact that one is conscious of being conscious, that one is conscious. Here, we ask what the computational mechanisms may be that implement this intuition. Our starting point is Clark and Karmiloff-Smith’s [Clark, A., & Karmiloff-Smith, A. (1993). The cognizer’s innards: A psychological and philosophical perspective on the development of thought. Mind and Language, 8, 487–519] point that knowledge acquired by a connectionist network always remains “knowledge in the network rather than knowledge for the network”. That is, while connectionist networks may become exquisitely sensitive to regularities contained in their input–output environment, they never exhibit the ability to access and manipulate this knowledge as knowledge: The knowledge can only be expressed through performing the task upon which the network was trained; it remains forever embedded in the causal pathways that developed as a result of training. To address this issue, we present simulations in which two networks interact. The states of a first-order network trained to perform a simple categorization task become input to a second-order network trained either as an encoder or on another categorization task. Thus, the second-order network “observes” the states of the first-order network and has, in the first case, to reproduce these states on its output units, and in the second case, to use the states as cues in order to solve the secondary task. This implements a limited form of metarepresentation, to the extent that the second-order network’s internal representations become re-representations of the first-order network’s internal states. 
We conclude that this mechanism provides the beginnings of a computational mechanism to account for mental attitudes, that is, an understanding by a cognitive system of the manner in which its first-order knowledge is held (belief, hope, fear, etc.). Consciousness, in this light, thus involves knowledge of the geography of one's own internal representations — a geography that is itself learned over time as a result of an agent's attributing value to the various experiences it enjoys through interaction with itself, the world, and others.
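The two-network architecture described in the abstract can be illustrated with a short NumPy sketch. This is a minimal illustration under stated assumptions, not the authors' actual simulation: the network sizes, the toy categorization task (a majority function over 4-bit patterns), and all learning parameters are invented here. A first-order network is trained on the categorization; a second-order network then "observes" the first-order hidden states and, in the encoder condition, must reproduce them on its output units.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy categorization task (an assumption, not the paper's task):
# classify random 4-bit patterns by majority of active bits.
X = rng.integers(0, 2, size=(64, 4)).astype(float)
y = (X.sum(axis=1) > 2).astype(float).reshape(-1, 1)

# --- First-order network: 4-3-1 MLP trained by backprop (MSE loss) ---
W1 = rng.normal(0, 0.5, (4, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 1)); b2 = np.zeros(1)
lr, n = 0.5, len(X)
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)                 # first-order hidden states
    out = sigmoid(H @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # MSE gradient at the output
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / n; b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_H / n;  b1 -= lr * d_H.mean(0)

# --- Second-order network ("encoder" condition): it takes the frozen
# first-order hidden states as input and must reproduce them as output,
# so its own hidden layer re-represents the first-order states. ---
H = sigmoid(X @ W1 + b1)                     # observed first-order states
V1 = rng.normal(0, 0.5, (3, 3)); c1 = np.zeros(3)
V2 = rng.normal(0, 0.5, (3, 3)); c2 = np.zeros(3)
for _ in range(5000):
    M = sigmoid(H @ V1 + c1)                 # metarepresentation layer
    recon = sigmoid(M @ V2 + c2)
    d_r = (recon - H) * recon * (1 - recon)
    d_M = (d_r @ V2.T) * M * (1 - M)
    V2 -= lr * M.T @ d_r / n; c2 -= lr * d_r.mean(0)
    V1 -= lr * H.T @ d_M / n; c1 -= lr * d_M.mean(0)

first_order_acc = np.mean(
    (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5))
recon_err = np.mean(
    (sigmoid(sigmoid(H @ V1 + c1) @ V2 + c2) - H) ** 2)
```

In the second simulation reported in the abstract, the second-order network would instead be trained to use `H` as cues for a separate categorization target; the observation relation between the two networks stays the same.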
Original language: English
Pages (from-to): 1032-1039
Number of pages: 8
Journal: Neural Networks
Volume: 20
Issue number: 9
DOI: 10.1016/j.neunet.2007.09.011
Publication status: Published - Nov 2007


Keywords

  • consciousness
  • representation
  • higher-order thought
  • neural networks

Cite this

Cleeremans, A., Timmermans, B., & Pasquali, A. (2007). Consciousness and metarepresentation: A computational sketch. Neural Networks, 20(9), 1032-1039. https://doi.org/10.1016/j.neunet.2007.09.011
@article{e8efb93c343440739c4b60540f1c3b89,
  title = "Consciousness and metarepresentation: A computational sketch",
  keywords = "consciousness, representation, higher-order thought, neural networks",
  author = "Axel Cleeremans and Bert Timmermans and Antoine Pasquali",
  year = "2007",
  month = "11",
  doi = "10.1016/j.neunet.2007.09.011",
  language = "English",
  volume = "20",
  pages = "1032--1039",
  journal = "Neural Networks",
  issn = "0893-6080",
  publisher = "Elsevier Limited",
  number = "9",
}
